AI News Archives

MergeBench: A Benchmark for Merging Domain-Specialized LLMs

arXiv:2505.10833v4 Announce Type: replace Abstract: Model merging provides a scalable alternative to multi-task training by combining specialized finetuned models through parameter arithmetic, enabling efficient deployment without the need for joint training or access to all task data. While recent methods…
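The concrete methods benchmarked are truncated above, but "parameter arithmetic" generally means operating directly on weight tensors. As a minimal sketch of one of the simplest such schemes, task arithmetic (Ilharco et al.), the snippet below averages each specialist's weight delta from a shared base model; the function name and `alpha` knob are illustrative, not MergeBench's API:

```python
import torch

def task_arithmetic_merge(base_state, finetuned_states, alpha=1.0):
    """Merge domain-specialized checkpoints by averaging their weight
    deltas ("task vectors") from the shared base model. All checkpoints
    must share the same architecture and state-dict key names."""
    merged = {}
    n = len(finetuned_states)
    for name, base_w in base_state.items():
        # Average the task vectors (finetuned - base) across specialists.
        delta = sum(ft[name] - base_w for ft in finetuned_states) / n
        merged[name] = base_w + alpha * delta
    return merged

# Usage: merged = task_arithmetic_merge(base.state_dict(),
#                 [m.state_dict() for m in specialists])
```

No gradient steps or task data are needed, which is exactly the deployment advantage the abstract points to; richer merging methods mainly differ in how they weight or sparsify these deltas.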

Mamba Can Learn Low-Dimensional Targets In-Context via Test-Time Feature Learning

arXiv:2510.12026v1 Announce Type: new Abstract: Mamba, a recently proposed linear-time sequence model, has attracted significant attention for its computational efficiency and strong empirical performance. However, a rigorous theoretical understanding of its underlying mechanisms remains limited. In this work, we provide…
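The paper's theoretical results are cut off above. As background for the "linear-time" claim, here is a minimal diagonal state-space scan of the kind at Mamba's core; real Mamba makes the transition parameters input-dependent ("selective"), while this sketch (with hypothetical names) keeps them fixed:

```python
import numpy as np

def diagonal_ssm_scan(x, A, B, C):
    """Minimal diagonal linear state-space recurrence:
    h_t = A * h_{t-1} + B * x_t,  y_t = C . h_t.
    One pass over the sequence, i.e. O(T) in sequence length T."""
    h = np.zeros(A.shape[0])
    ys = np.empty(x.shape[0])
    for t in range(x.shape[0]):
        h = A * h + B * x[t]   # elementwise (diagonal) state update
        ys[t] = C @ h          # linear readout
    return ys

# Toy usage on a scalar input sequence with a 4-dim hidden state:
rng = np.random.default_rng(0)
y = diagonal_ssm_scan(rng.standard_normal(16),
                      A=np.full(4, 0.9),
                      B=rng.standard_normal(4),
                      C=rng.standard_normal(4))
```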

Offline Fictitious Self-Play for Competitive Games

arXiv:2403.00841v2 Announce Type: replace-cross Abstract: Offline Reinforcement Learning (RL) enables policy improvement from fixed datasets without online interactions, making it highly suitable for real-world applications lacking efficient simulators. Despite its success in the single-agent setting, offline multi-agent RL remains a…
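The offline algorithm itself is truncated, but fictitious self-play builds on classical fictitious play: each player best-responds to the opponent's empirical average strategy, and the averages converge to equilibrium in two-player zero-sum games. A sketch of the classical version on a matrix game is below (the paper's offline variant replaces exact best responses with policies improved from a fixed dataset; this only illustrates the averaging structure):

```python
import numpy as np

def fictitious_play(payoff, iters=2000):
    """Classical fictitious play on a two-player zero-sum matrix game.
    `payoff` holds the row player's payoffs; the column player minimizes.
    Returns the empirical average strategy of each player."""
    m, n = payoff.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] = col_counts[0] = 1.0   # arbitrary initial actions
    for _ in range(iters):
        # Row player best-responds to the column player's average strategy.
        row_br = np.argmax(payoff @ (col_counts / col_counts.sum()))
        # Column player (minimizer) best-responds to the row average.
        col_br = np.argmin((row_counts / row_counts.sum()) @ payoff)
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: both average strategies approach (0.5, 0.5).
pennies = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(fictitious_play(pennies))
```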

A Generalized Information Bottleneck Theory of Deep Learning

arXiv:2509.26327v2 Announce Type: replace Abstract: The Information Bottleneck (IB) principle offers a compelling theoretical framework to understand how neural networks (NNs) learn. However, its practical utility has been constrained by unresolved theoretical ambiguities and significant challenges in accurate estimation. In…
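For readers unfamiliar with the framework being generalized: the standard IB objective (Tishby et al.) seeks a stochastic encoder producing a representation T of input X that stays predictive of label Y while compressing away the rest of X. The paper's generalized form is truncated above; the classical Lagrangian is:

```latex
% Standard Information Bottleneck Lagrangian:
% a stochastic encoder p(t|x) maps input X to representation T,
% which should predict label Y while compressing X.
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}}
    \;=\; I(X;T) \;-\; \beta\, I(T;Y)
% I(X;T): compression term; I(T;Y): relevance term;
% \beta > 0 trades off compression against predictive power.
```

The "challenges in accurate estimation" the abstract mentions refer to these mutual-information terms, which are notoriously hard to estimate for high-dimensional deterministic networks.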

Evaluating multiple models using labeled and unlabeled data

arXiv:2501.11866v3 Announce Type: replace Abstract: It remains difficult to evaluate machine learning classifiers in the absence of a large, labeled dataset. While labeled data can be prohibitively expensive or impossible to obtain, unlabeled data is plentiful. Here, we introduce Semi-Supervised…
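The paper's own estimator is cut off above. To illustrate the general idea of augmenting a small labeled sample with plentiful unlabeled data, here is a sketch in the spirit of prediction-powered inference (Angelopoulos et al., 2023), explicitly a stand-in, not the paper's method; `ppi_accuracy` and the proxy construction are hypothetical names for this example:

```python
import numpy as np

def ppi_accuracy(correct_lab, proxy_lab, proxy_unlab):
    """Accuracy estimate combining labeled and unlabeled data.
    `proxy_*` is any cheap surrogate for correctness (e.g. the model's
    confidence that its prediction is right); `correct_lab` are 0/1
    correctness indicators on the small labeled set.
    Estimate = E_unlab[proxy] + E_lab[correct - proxy], which is
    unbiased when both samples come from the same distribution."""
    rectifier = np.mean(correct_lab - proxy_lab)
    return np.mean(proxy_unlab) + rectifier

# Toy illustration: 50 labeled vs. 5000 unlabeled examples.
rng = np.random.default_rng(1)
true_acc = 0.8
proxy_unlab = np.clip(true_acc + 0.1 * rng.standard_normal(5000), 0, 1)
proxy_lab = np.clip(true_acc + 0.1 * rng.standard_normal(50), 0, 1)
correct_lab = rng.binomial(1, true_acc, size=50)
print(ppi_accuracy(correct_lab, proxy_lab, correct_unlab := proxy_unlab))
```

The unlabeled mean carries most of the statistical weight, while the small labeled set only corrects the proxy's bias, which is why the variance can be far lower than a labeled-only estimate.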

Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator

arXiv:2505.16690v3 Announce Type: replace Abstract: Post-training of large language models is essential for adapting pre-trained language models (PLMs) to align with human preferences and downstream tasks. While PLMs typically exhibit well-calibrated confidence, post-trained language models (PoLMs) often suffer from over-confidence,…
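The paper's unsupervised calibration recipe is truncated above. As background on the over-confidence problem it targets, the standard post-hoc fix is temperature scaling (Guo et al., 2017), sketched below; note this baseline is supervised (it fits a scalar T on held-out labels), which is precisely the requirement the paper's pre-trained-LLM calibrator aims to drop:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    """Fit a single temperature T by minimizing negative log-likelihood
    on held-out data (temperature scaling). logits: (N, C), labels: (N,).
    Dividing logits by T > 1 softens over-confident distributions."""
    def nll(T):
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)        # stabilize softmax
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(labels)), labels].mean()
    res = minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded")
    return res.x

# Synthetic check: logits sharpened by a factor of 3 recover T ~ 3.
rng = np.random.default_rng(0)
true_logits = rng.standard_normal((2000, 10))
labels = np.array([rng.choice(10, p=np.exp(l) / np.exp(l).sum())
                   for l in true_logits])
print(fit_temperature(3.0 * true_logits, labels))
```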