Archives AI News

MISA: Memory-Efficient LLMs Optimization with Module-wise Importance Sampling

arXiv:2511.00056v1 Announce Type: new Abstract: The substantial memory demands of pre-training and fine-tuning large language models (LLMs) require memory-efficient optimization algorithms. One promising approach is layer-wise optimization, which treats each transformer block as a single layer and optimizes it sequentially,…
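The block-wise scheme the abstract describes — treating each transformer block as one layer and optimizing it sequentially, so gradients and optimizer state exist for only one block at a time — can be sketched in plain Python. This is a toy with scalar parameters and a hypothetical quadratic loss, not the paper's MISA importance sampler:

```python
import random

random.seed(0)

# Toy "model": three blocks of scalar parameters (stand-ins for transformer blocks).
blocks = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

def loss(params):
    # Hypothetical objective: pull every parameter toward zero.
    return sum(p * p for block in params for p in block)

def grad_block(block):
    # Analytic gradient of the quadratic loss w.r.t. one block only.
    return [2.0 * p for p in block]

lr = 0.1
for block in blocks:            # visit blocks sequentially
    for _ in range(50):         # inner SGD steps on this block alone;
        g = grad_block(block)   # gradients (and any optimizer state) are
        for j in range(len(block)):  # only ever materialized for the active block
            block[j] -= lr * g[j]

print(loss(blocks))
```

The memory saving comes from the inner loop: at no point are gradients for all blocks held simultaneously, which is the property the abstract's layer-wise approach exploits.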

Localist LLMs — A Mathematical Framework for Dynamic Locality Control

arXiv:2510.09338v2 Announce Type: replace-cross Abstract: We present a novel framework for training large language models with continuously adjustable internal representations that span the full spectrum from localist (interpretable, rule-based) to distributed (generalizable, efficient) encodings. The key innovation is a locality…

Automatically Finding Rule-Based Neurons in OthelloGPT

arXiv:2511.00059v1 Announce Type: new Abstract: OthelloGPT, a transformer trained to predict valid moves in Othello, provides an ideal testbed for interpretability research. The model is complex enough to exhibit rich computational patterns, yet grounded in rule-based game logic that enables…

Extremal Contours: Gradient-driven contours for compact visual attribution

arXiv:2511.01411v1 Announce Type: cross Abstract: Faithful yet compact explanations for vision models remain a challenge, as commonly used dense perturbation masks are often fragmented and overfitted, needing careful post-processing. Here, we present a training-free explanation method that replaces dense masks…

EVINGCA: Adaptive Graph Clustering with Evolving Neighborhood Statistics

arXiv:2511.00064v1 Announce Type: new Abstract: Clustering algorithms often rely on restrictive assumptions: K-Means and Gaussian Mixtures presuppose convex, Gaussian-like clusters, while DBSCAN and HDBSCAN capture non-convexity but can be highly sensitive. I introduce EVINGCA (Evolving Variance-Informed Nonparametric Graph Construction Algorithm),…
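The convexity assumption called out above is visible in a minimal Lloyd's-iteration sketch of K-Means (toy data, deterministic initialization from the first k points; this is standard K-Means, not EVINGCA):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal Lloyd's algorithm. K-Means implicitly assumes convex,
    roughly isotropic clusters: each point joins its nearest centroid,
    so decision boundaries are always Voronoi (straight-line) cells."""
    centroids = X[:k].copy()                 # deterministic init: first k points
    for _ in range(iters):
        # Squared distances from every point to every centroid.
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)            # assignment step
        for j in range(k):                   # update step: move to cluster mean
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return labels, centroids

# Two well-separated convex blobs: K-Means recovers them. A ring around a
# blob, by contrast, cannot be separated by Voronoi cells.
X = np.array([[0.0, 0.0], [0.1, 0.2], [-0.1, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
labels, _ = kmeans(X, k=2)
print(labels)
```

Density-based methods such as DBSCAN drop the convexity assumption by growing clusters through neighborhood reachability instead of centroid distance, which is the trade-off the abstract contrasts against.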

Probabilistic Robustness for Free? Revisiting Training via a Benchmark

arXiv:2511.01724v1 Announce Type: cross Abstract: Deep learning models are notoriously vulnerable to imperceptible perturbations. Most existing research centers on adversarial robustness (AR), which evaluates models under worst-case scenarios by examining the existence of deterministic adversarial examples (AEs). In contrast, probabilistic…

Scientific Machine Learning with Kolmogorov-Arnold Networks

arXiv:2507.22959v2 Announce Type: replace Abstract: The field of scientific machine learning, which originally utilized multilayer perceptrons (MLPs), is increasingly adopting Kolmogorov-Arnold Networks (KANs) for data encoding. This shift is driven by the limitations of MLPs, including poor interpretability, fixed activation…
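The architectural contrast the abstract draws — fixed node activations in MLPs versus learnable univariate functions in KANs — can be illustrated with a minimal one-layer sketch. This toy uses a polynomial basis for the edge functions rather than the B-splines typical of KAN implementations, and all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def poly_basis(x, degree=3):
    # Basis features [1, x, x^2, x^3] for each scalar input.
    return np.stack([x ** k for k in range(degree + 1)], axis=-1)

class KANLayer:
    """One Kolmogorov-Arnold-style layer: a learnable univariate function
    phi on every edge (in_dim x out_dim), instead of a fixed activation
    applied at each node as in an MLP."""
    def __init__(self, in_dim, out_dim, degree=3):
        # One coefficient vector per edge: shape (in_dim, out_dim, degree+1).
        self.coef = rng.normal(scale=0.1, size=(in_dim, out_dim, degree + 1))

    def __call__(self, x):
        # x: (batch, in_dim).  Evaluate every edge function on its input,
        # then sum contributions over the input dimension.
        b = poly_basis(x)                          # (batch, in_dim, degree+1)
        phi = np.einsum('bik,iok->bio', b, self.coef)
        return phi.sum(axis=1)                     # (batch, out_dim)

layer = KANLayer(in_dim=2, out_dim=3)
y = layer(rng.normal(size=(5, 2)))
print(y.shape)  # (5, 3)
```

Because the nonlinearity lives on the edges and its coefficients are trained, each edge function can be inspected directly, which is one source of the interpretability advantage the abstract attributes to KANs over MLPs.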