Detecting Data Contamination from Reinforcement Learning Post-training for Large Language Models
arXiv:2510.09259v1 Announce Type: cross

Abstract: Data contamination poses a significant threat to the reliable evaluation of Large Language Models (LLMs). The issue arises when benchmark samples inadvertently appear in training sets, compromising the validity of reported performance. While detection…
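As general background for the contamination problem the abstract describes (benchmark samples appearing verbatim in training sets), a common baseline check is n-gram overlap between a benchmark sample and training text. This is a minimal illustrative sketch, not the detection method proposed in the paper; the function names and the choice of n=8 are assumptions for the example.

```python
def ngrams(text: str, n: int = 8) -> set:
    """Return the set of whitespace-token n-grams in `text`."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_ratio(benchmark_sample: str, training_text: str, n: int = 8) -> float:
    """Fraction of the benchmark sample's n-grams found verbatim in training text.

    A ratio near 1.0 suggests the sample may have leaked into training data;
    the threshold and n are hyperparameters, not values from the paper.
    """
    sample_grams = ngrams(benchmark_sample, n)
    if not sample_grams:
        return 0.0
    return len(sample_grams & ngrams(training_text, n)) / len(sample_grams)
```

In practice such surface-level checks miss paraphrased or reformatted contamination, which is one reason detection beyond exact overlap is an active research problem.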
