Archives AI News

Beyond Pairwise: Empowering LLM Alignment With Ranked Choice Modeling

arXiv:2510.23631v1 Announce Type: new Abstract: Alignment of large language models (LLMs) has predominantly relied on pairwise preference optimization, where annotators select the better of two responses to a prompt. While simple, this approach overlooks the opportunity to learn from richer…
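The abstract contrasts pairwise preference optimization with learning from richer ranked feedback. As an illustration only (the abstract is truncated and does not specify the paper's objective), the sketch below compares a standard pairwise Bradley-Terry loss with a Plackett-Luce listwise loss, which is the classical ranked-choice model; the reward scores are placeholders.

```python
# Minimal sketch (PyTorch): pairwise Bradley-Terry loss vs. a Plackett-Luce
# listwise loss over a full ranking of K responses. Scores are placeholders;
# in practice they would come from a reward model or a DPO-style implicit reward.
import torch
import torch.nn.functional as F

def pairwise_bt_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry: -log sigma(r_chosen - r_rejected), averaged over pairs."""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

def plackett_luce_loss(scores_ranked: torch.Tensor) -> torch.Tensor:
    """Plackett-Luce negative log-likelihood. scores_ranked is (batch, K) with
    columns ordered best to worst; at each position the model picks the best
    of the remaining items."""
    batch, K = scores_ranked.shape
    loss = 0.0
    for k in range(K):
        loss = loss - (scores_ranked[:, k] - torch.logsumexp(scores_ranked[:, k:], dim=1))
    return loss.mean() / K

# Toy usage: 2 prompts, 4 ranked responses each.
scores = torch.randn(2, 4, requires_grad=True)
print(pairwise_bt_loss(scores[:, 0], scores[:, -1]))  # uses only best vs. worst
print(plackett_luce_loss(scores))                     # uses the full ranking
```

The point of the comparison is that the pairwise loss discards everything except one chosen/rejected pair, while the listwise loss consumes the full ordering of all K responses.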

LLMComp: A Language Modeling Paradigm for Error-Bounded Scientific Data Compression

arXiv:2510.23632v1 Announce Type: new Abstract: The rapid growth of high-resolution scientific simulations and observation systems is generating massive spatiotemporal datasets, making efficient, error-bounded compression increasingly important. Meanwhile, decoder-only large language models (LLMs) have demonstrated remarkable capabilities in modeling complex sequential…
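The abstract emphasizes error-bounded compression of spatiotemporal data before sequence modeling. A minimal sketch of the error-bounded quantization step such a pipeline could use appears below; the uniform-quantizer construction and function names are assumptions for illustration, not details taken from the paper, and the autoregressive modeling plus entropy-coding stage is omitted.

```python
# Minimal sketch (NumPy): uniform quantization with step 2*eps, which
# guarantees |x - x_hat| <= eps for every value. The integer symbol stream
# would then be modeled autoregressively (e.g., by a decoder-only LLM) and
# entropy coded; that stage is not shown here.
import numpy as np

def quantize_error_bounded(x: np.ndarray, eps: float) -> np.ndarray:
    """Map each value to an integer symbol; reconstruction error is <= eps."""
    return np.round(x / (2.0 * eps)).astype(np.int64)

def dequantize(symbols: np.ndarray, eps: float) -> np.ndarray:
    """Inverse mapping back to floating point."""
    return symbols.astype(np.float64) * (2.0 * eps)

# Toy check on a synthetic spatiotemporal field.
field = np.sin(np.linspace(0, 10, 1000)).reshape(10, 100)
eps = 1e-3
symbols = quantize_error_bounded(field, eps)
recon = dequantize(symbols, eps)
assert np.max(np.abs(field - recon)) <= eps
print("max abs error:", np.max(np.abs(field - recon)))
```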

Monotone and Separable Set Functions: Characterizations and Neural Models

arXiv:2510.23634v1 Announce Type: new Abstract: Motivated by applications for set containment problems, we consider the following fundamental problem: can we design set-to-vector functions so that the natural partial order on sets is preserved, namely $S \subseteq T$ if and only…
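The abstract asks for set-to-vector maps that respect the containment order. One standard construction, offered below purely as an assumption-labeled illustration rather than the model proposed in the paper, is elementwise max pooling over per-element embeddings: if $S \subseteq T$ then the pooled vector of $S$ is coordinatewise at most that of $T$.

```python
# Minimal sketch (PyTorch): a set-to-vector map that is monotone with respect
# to set containment. Max pooling over per-element embeddings guarantees
# phi(S) <= phi(T) coordinatewise whenever S is a subset of T. This is a
# standard construction used for illustration, not the paper's architecture.
import torch
import torch.nn as nn

class MaxPoolSetEncoder(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, elements: torch.Tensor) -> torch.Tensor:
        # elements: (set_size, in_dim); empty sets are not handled here.
        return self.phi(elements).max(dim=0).values

enc = MaxPoolSetEncoder(in_dim=8, out_dim=16)
T = torch.randn(5, 8)   # a set of 5 elements
S = T[:3]               # a subset of T
assert torch.all(enc(S) <= enc(T))  # monotone w.r.t. containment
```

Note that this gives only the "if" direction; characterizing when the converse also holds (so that the vector order certifies containment) is the harder question the title refers to.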

DP-LLM: Runtime Model Adaptation with Dynamic Layer-wise Precision Assignment

arXiv:2508.06041v3 Announce Type: replace Abstract: How can we effectively handle queries for on-device large language models (LLMs) with varying runtime constraints, such as latency and accuracy? Multi-scale quantization addresses this challenge by enabling memory-efficient runtime model adaptation of LLMs through…
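The abstract describes adapting per-layer precision at runtime under latency and accuracy constraints. The sketch below shows one simple greedy assignment policy under a latency budget; the sensitivities, bit-width options, and cost numbers are hypothetical placeholders, and the paper's actual assignment algorithm may differ.

```python
# Minimal sketch (plain Python): greedy layer-wise precision assignment under
# a latency budget. Start every layer at the lowest bit-width, then upgrade
# layers in order of decreasing sensitivity while the budget allows.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    sensitivity: float        # estimated accuracy impact of low precision
    latency_per_bit: float    # extra latency incurred per additional bit

def assign_precision(layers, bit_options=(4, 8), latency_budget=10.0):
    low, high = min(bit_options), max(bit_options)
    plan = {l.name: low for l in layers}
    spent = 0.0
    for l in sorted(layers, key=lambda l: l.sensitivity, reverse=True):
        upgrade_cost = (high - low) * l.latency_per_bit
        if spent + upgrade_cost <= latency_budget:
            plan[l.name] = high
            spent += upgrade_cost
    return plan, spent

layers = [Layer(f"block{i}", sensitivity=s, latency_per_bit=0.8)
          for i, s in enumerate([0.9, 0.2, 0.7, 0.4])]
plan, spent = assign_precision(layers, latency_budget=7.0)
print(plan, "latency used:", spent)
```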