Your Pre-trained LLM is Secretly an Unsupervised Confidence Calibrator
arXiv:2505.16690v5 Announce Type: replace Abstract: Post-training is essential for aligning pre-trained language models (PLMs) with human preferences and downstream tasks. While PLMs typically exhibit well-calibrated confidence, post-trained language models (PoLMs) often suffer from over-confidence,…
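
The excerpt contrasts "well-calibrated" PLMs with "over-confident" PoLMs but is truncated before describing the paper's method, so the sketch below does not reproduce it. It only illustrates the standard way this contrast is quantified: expected calibration error (ECE), the confidence-weighted gap between a model's stated confidence and its empirical accuracy. All numbers are synthetic and purely illustrative.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: bin predictions by confidence, then sum the per-bin
    # |avg confidence - avg accuracy| gaps, weighted by bin size.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap
    return ece

# Synthetic comparison: both "models" answer ~70% of questions correctly,
# but the PLM reports ~0.7 confidence while the PoLM reports ~0.95.
rng = np.random.default_rng(0)
acc = rng.random(1000) < 0.7
plm_conf = np.clip(0.70 + 0.10 * rng.standard_normal(1000), 0.0, 1.0)
polm_conf = np.clip(0.95 + 0.03 * rng.standard_normal(1000), 0.0, 1.0)
print(f"PLM  ECE: {expected_calibration_error(plm_conf, acc):.3f}")   # small gap
print(f"PoLM ECE: {expected_calibration_error(polm_conf, acc):.3f}")  # ~0.25: over-confident

A low ECE means confidence tracks accuracy; the inflated PoLM confidences yield an ECE near 0.25, which is the over-confidence pattern the abstract refers to.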
