Don’t Waste Mistakes: Leveraging Negative RL-Groups via Confidence Reweighting
arXiv:2510.08696v1 (new)

Abstract: Reinforcement learning with verifiable rewards (RLVR) has become a standard recipe for improving large language models (LLMs) on reasoning tasks, with Group Relative Policy Optimization (GRPO) widely used in practice. Yet GRPO wastes substantial compute…
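To make the wasted-compute point concrete, here is a minimal sketch of GRPO's group-relative advantage computation (not the paper's method, just the standard baseline it critiques): each rollout's reward is normalized against its group's mean and standard deviation, so a group whose rollouts all receive the same reward, e.g. all failures, yields zero advantage everywhere and contributes no gradient.

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages: (r - mean) / std within one group of rollouts.

    If every reward in the group is identical (e.g. all rollouts fail a
    verifiable check), every advantage is exactly zero and the group
    produces no learning signal -- the "wasted compute" the abstract notes.
    """
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# An all-failure group: zero advantage for every rollout.
print(grpo_advantages([0.0, 0.0, 0.0, 0.0]))
# A mixed group: correct rollouts get positive advantage, failures negative.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))
```

The all-zero case is what the title's "negative groups" refers to; the paper proposes reweighting such groups (by model confidence) rather than discarding their signal.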
