Think Just Enough: Sequence-Level Entropy as a Confidence Signal for LLM Reasoning
arXiv:2510.08146v2 Announce Type: replace Abstract: We introduce a simple yet novel entropy-based framework to drive token efficiency in large language models during reasoning tasks. Our approach uses Shannon entropy from token-level logprobs as a confidence signal to enable early stopping,…
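The abstract's core idea, computing Shannon entropy from token-level logprobs and using a low value as a high-confidence signal for early stopping, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names, the per-token top-k renormalization, the mean-over-tokens aggregation, and the `ENTROPY_THRESHOLD` cutoff are all assumptions for the example.

```python
import math

def token_entropy(top_logprobs):
    """Shannon entropy (in nats) of one token's top-k logprob distribution."""
    probs = [math.exp(lp) for lp in top_logprobs]
    # Renormalize, since the top-k candidates may not cover the full vocabulary mass.
    total = sum(probs)
    probs = [p / total for p in probs]
    return -sum(p * math.log(p) for p in probs if p > 0)

def sequence_entropy(logprobs_per_token):
    """Sequence-level confidence: mean per-token entropy over the generation."""
    entropies = [token_entropy(t) for t in logprobs_per_token]
    return sum(entropies) / len(entropies)

# Hypothetical cutoff: treat low mean entropy as high enough confidence to stop.
ENTROPY_THRESHOLD = 0.5  # nats; would be tuned per model and task

def should_stop(logprobs_per_token):
    """Early-stopping decision based on the sequence-level entropy signal."""
    return sequence_entropy(logprobs_per_token) < ENTROPY_THRESHOLD
```

A peaked distribution (e.g. one token holding 99% of the mass) yields entropy near zero and triggers stopping, while a flat distribution over many candidates keeps the model reasoning.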
