Archives AI News

A Brain-to-Population Graph Learning Framework for Diagnosing Brain Disorders

arXiv:2506.16096v2 Announce Type: replace Abstract: Recently developed graph-based methods for diagnosing brain disorders using functional connectivity rely heavily on predefined brain atlases, but overlook the rich information embedded within atlases and the confounding effects of site and phenotype variability. To…

Escaping Local Optima in the Waddington Landscape: A Multi-Stage TRPO-PPO Approach for Single-Cell Perturbation Analysis

arXiv:2510.13018v1 Announce Type: new Abstract: Modeling cellular responses to genetic and chemical perturbations remains a central challenge in single-cell biology. Existing data-driven frameworks have advanced perturbation prediction through variational autoencoders, chemically conditioned autoencoders, and large-scale transformer pretraining. However, these models…

How Well Can Preference Optimization Generalize Under Noisy Feedback?

arXiv:2510.01458v2 Announce Type: replace Abstract: As large language models (LLMs) advance their capabilities, aligning these models with human preferences has become crucial. Preference optimization, which trains models to distinguish between preferred and non-preferred responses based on human feedback, has become…
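The excerpt describes preference optimization as training a model to distinguish preferred from non-preferred responses. A common instantiation of that idea (used by reward modeling and DPO-style methods, though the abstract does not specify which variant the paper studies) is a Bradley-Terry logistic loss on the score difference between the two responses. A minimal sketch:

```python
import numpy as np

def bt_loss(s_pref, s_rej):
    """Bradley-Terry preference loss: -log sigmoid(s_pref - s_rej).
    The loss shrinks as the preferred response's score rises above
    the rejected response's score."""
    return -np.log(1.0 / (1.0 + np.exp(-(s_pref - s_rej))))

# A larger margin in favor of the preferred response gives a smaller loss.
lo = bt_loss(2.0, 0.0)   # preferred response scores higher
hi = bt_loss(0.0, 2.0)   # preference violated
```

Noisy feedback, the question the paper raises, corresponds to the pair labels (which response is "preferred") being flipped with some probability, which directly corrupts the sign of the margin in this loss.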

Clustering with minimum spanning trees: How good can it be?

arXiv:2303.05679v4 Announce Type: replace-cross Abstract: Minimum spanning trees (MSTs) provide a convenient representation of datasets in numerous pattern recognition activities. Moreover, they are relatively fast to compute. In this paper, we quantify the extent to which they are meaningful in…
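The classic way to turn an MST into a clustering, which the abstract's framing presupposes, is to cut the k-1 heaviest edges of the tree; the surviving forest components are the clusters (this is equivalent to single-linkage with k clusters). A minimal sketch with SciPy:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_cluster(points, k):
    """Cluster by removing the k-1 heaviest MST edges;
    equivalent to single-linkage clustering with k clusters."""
    dist = squareform(pdist(points))               # dense pairwise distances
    mst = minimum_spanning_tree(dist).toarray()    # n-1 edges of the MST
    edges = np.argwhere(mst > 0)
    weights = mst[mst > 0]
    for idx in np.argsort(weights)[-(k - 1):]:     # cut the heaviest edges
        i, j = edges[idx]
        mst[i, j] = 0
    # The remaining forest's connected components are the clusters.
    _, labels = connected_components(mst, directed=False)
    return labels

# Two well-separated blobs: cutting the single heaviest edge splits them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
labels = mst_cluster(X, k=2)
```

The paper's question is how far such MST-based rules can go on harder geometries, where the heaviest-edge heuristic is known to be brittle to chaining.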

Information Shapes Koopman Representation

arXiv:2510.13025v1 Announce Type: new Abstract: The Koopman operator provides a powerful framework for modeling dynamical systems and has attracted growing interest from the machine learning community. However, its infinite-dimensional nature makes identifying suitable finite-dimensional subspaces challenging, especially for deep architectures.…
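The finite-dimensional subspace identification the abstract refers to is usually posed as Extended Dynamic Mode Decomposition (EDMD): choose a dictionary of observables psi, then fit the Koopman matrix K by least squares so that psi(x_{t+1}) ≈ psi(x_t) K. A minimal sketch (EDMD is the standard baseline, not necessarily this paper's method):

```python
import numpy as np

def edmd(X, Y, psi):
    """Extended DMD: least-squares Koopman approximation K with
    psi(Y) ≈ psi(X) @ K, where psi lifts states into a dictionary
    of observables (the finite-dimensional subspace)."""
    PX, PY = psi(X), psi(Y)
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)  # min ||PX K - PY||_F
    return K

# Linear system x_{t+1} = A x_t: with a linear dictionary the Koopman
# matrix recovers the dynamics exactly (here K = A^T by convention).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))   # sampled states
Y = X @ A.T                     # their one-step images
K = edmd(X, Y, psi=lambda Z: Z)  # identity dictionary
```

For nonlinear systems the quality of K hinges entirely on the choice of psi, which is exactly the subspace-identification difficulty the abstract highlights for deep architectures.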

Conditional Distribution Compression via the Kernel Conditional Mean Embedding

arXiv:2504.10139v3 Announce Type: replace-cross Abstract: Existing distribution compression methods, like Kernel Herding (KH), were originally developed for unlabelled data. However, no existing approach directly compresses the conditional distribution of labelled data. To address this gap, we first introduce the Average…
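The kernel conditional mean embedding named in the title has a standard empirical estimator: given paired samples (x_i, y_i), the embedding of P(Y|X=x) is a weighted sum of the y-side kernel features, with weights alpha(x) = (K_X + n*lam*I)^{-1} k_X(x). A minimal sketch of those weights (a generic textbook estimator, not the paper's compression scheme):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cme_weights(X, x_query, lam=1e-3):
    """alpha(x) = (K_X + n*lam*I)^{-1} k_X(x): the conditional mean
    embedding is mu_{Y|x} = sum_i alpha_i(x) k_Y(y_i, .), so any
    conditional expectation E[g(Y)|x] is estimated as alpha(x) @ g(Y)."""
    n = len(X)
    K = rbf(X, X)
    k = rbf(X, x_query)
    return np.linalg.solve(K + n * lam * np.eye(n), k)

# Toy check: Y = 2X deterministically, so E[Y | x=0.5] should be near 1.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (300, 1))
Y = 2 * X
alpha = cme_weights(X, np.array([[0.5]]))
est = (alpha[:, 0] @ Y).item()
```

Compressing the conditional distribution then amounts to replacing the n-sample weight vector with a much smaller set of representative pairs while keeping such conditional expectations accurate.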

Bayesian Double Descent

arXiv:2507.07338v3 Announce Type: replace-cross Abstract: Double descent is a phenomenon of over-parameterized statistical models, such as deep neural networks, whose risk function exhibits a re-descending property. As the complexity of the model increases, risk exhibits a U-shaped region…

Randomness and Interpolation Improve Gradient Descent

arXiv:2510.13040v1 Announce Type: new Abstract: Based on Stochastic Gradient Descent (SGD), the paper introduces two optimizers, named Interpolational Accelerating Gradient Descent (IAGD) and Noise-Regularized Stochastic Gradient Descent (NRSGD). IAGD leverages second-order Newton Interpolation to expedite the convergence process…
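The excerpt does not give NRSGD's exact noise schedule, but the generic noise-regularization idea it names is well known: perturb each gradient with zero-mean Gaussian noise before the update. A minimal sketch of that generic scheme (the function name and defaults here are illustrative, not the paper's):

```python
import numpy as np

def noisy_sgd(grad, w0, lr=0.1, sigma=0.01, steps=200, seed=0):
    """Generic noise-regularized gradient descent: each step adds
    zero-mean Gaussian noise to the gradient, a common device for
    escaping sharp minima and regularizing training."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        g = grad(w) + sigma * rng.normal(size=w.shape)  # perturbed gradient
        w -= lr * g
    return w

# Quadratic f(w) = ||w||^2 / 2 with gradient w; the minimum is the origin,
# and the iterate settles into a small noise-driven neighborhood of it.
w = noisy_sgd(lambda w: w, w0=[1.0, -1.0])
```

IAGD's Newton-interpolation acceleration is specific to the paper and is not sketched here.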