Archives AI News

Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator

arXiv:2505.16690v5 Announce Type: replace Abstract: Post-training of large language models is essential for adapting pre-trained language models (PLMs) to align with human preferences and downstream tasks. While PLMs typically exhibit well-calibrated confidence, post-trained language models (PoLMs) often suffer from over-confidence,…
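The over-confidence mentioned above is typically quantified with the expected calibration error (ECE): predictions are binned by confidence and the gap between average confidence and accuracy is averaged across bins. A minimal sketch, assuming a simple equal-width binning scheme (the function name and binning choices are illustrative, not from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average |confidence - accuracy| gap per bin.

    confidences: predicted probabilities in (0, 1].
    correct: 1 if the prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# An over-confident model: claims 95% confidence but is right 25% of the time.
print(expected_calibration_error([0.95, 0.95, 0.95, 0.95], [1, 0, 0, 0]))
```

A perfectly calibrated model (confidence matching empirical accuracy in every bin) scores an ECE of 0; the over-confident pattern the abstract describes pushes ECE up.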

PeriodNet: Boosting the Potential of Attention Mechanism for Time Series Forecasting

arXiv:2511.19497v1 Announce Type: new Abstract: The attention mechanism has demonstrated remarkable potential in sequence modeling, exemplified by its successful application in natural language processing with models such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT). Despite…
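The attention mechanism the abstract refers to, as used in BERT and GPT, is scaled dot-product attention: query-key similarities are scaled, softmax-normalized into weights, and used to mix the values. A minimal single-head sketch (variable names are illustrative; this is the generic mechanism, not PeriodNet's architecture):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights

# Toy example: 3 time steps, embedding dimension 3.
Q = K = np.eye(3)
V = np.arange(9.0).reshape(3, 3)
out, w = scaled_dot_product_attention(Q, K, V)
```

Each output position is a convex combination of all value vectors, which is what lets attention relate distant time steps in a series directly.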

SLOFetch: Compressed-Hierarchical Instruction Prefetching for Cloud Microservices

arXiv:2511.04774v3 Announce Type: replace Abstract: Large-scale networked services rely on deep software stacks and microservice orchestration, which increase instruction footprints and create frontend stalls that inflate tail latency and energy. We revisit instruction prefetching for these cloud workloads and present…

Position: The Complexity of Perfect AI Alignment — Formalizing the RLHF Trilemma

arXiv:2511.19504v1 Announce Type: new Abstract: Reinforcement Learning from Human Feedback (RLHF) is widely used for aligning large language models, yet practitioners face a persistent puzzle: improving safety often reduces fairness, scaling to diverse populations becomes computationally intractable, and making systems…

An Asymptotic Equation Linking WAIC and WBIC in Singular Models

arXiv:2505.13902v3 Announce Type: replace-cross Abstract: In statistical learning, models are classified as regular or singular depending on whether the mapping from parameters to probability distributions is injective. Most models with hierarchical structures or latent variables are singular, for which conventional…
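For reference, the two criteria named in the title have standard definitions in Watanabe's singular learning theory (not spelled out in the truncated abstract; here $\mathbb{E}_w[\cdot]$ is expectation over the Bayesian posterior and $\mathbb{E}_w^{\beta}[\cdot]$ over the posterior at inverse temperature $\beta$):

```latex
\mathrm{WAIC} = T_n + \frac{V_n}{n}, \qquad
T_n = -\frac{1}{n}\sum_{i=1}^{n} \log \mathbb{E}_w\!\left[p(X_i \mid w)\right], \qquad
V_n = \sum_{i=1}^{n} \left\{ \mathbb{E}_w\!\left[(\log p(X_i \mid w))^2\right]
      - \mathbb{E}_w\!\left[\log p(X_i \mid w)\right]^2 \right\}

\mathrm{WBIC} = \mathbb{E}_w^{\beta}\!\left[-\sum_{i=1}^{n} \log p(X_i \mid w)\right],
\qquad \beta = \frac{1}{\log n}
```

WAIC estimates generalization loss and WBIC estimates the marginal-likelihood (free-energy) term; the paper's contribution is an asymptotic equation linking the two in singular models.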