By Sergio De Simone

Researchers from Stanford, Princeton, and Cornell have developed a new benchmark to better evaluate the coding abilities of large language models (LLMs). Called CodeClash, the benchmark pits LLMs against each other in multi-round tournaments to assess their capacity to pursue competitive, high-level objectives rather than narrowly defined, task-specific problems.
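The article describes CodeClash only at a high level, but the tournament structure it outlines could be sketched as a simple round-robin loop like the one below. Every name here (run_match, tournament, the coin-flip outcome) is a hypothetical illustration of the multi-round format, not CodeClash's actual API.

```python
import itertools
import random
from collections import defaultdict

# Hypothetical sketch of a multi-round tournament between LLM agents.
# None of these names come from CodeClash; they only illustrate the
# round-robin structure the article describes.

def run_match(agent_a: str, agent_b: str) -> str:
    """Stand-in for one competitive match between two agents.

    In a real benchmark this would pit the agents' codebases against
    each other on a shared objective; a coin flip is a placeholder
    for that outcome here.
    """
    return random.choice([agent_a, agent_b])

def tournament(agents: list[str], rounds: int = 5) -> dict[str, int]:
    """Play the given number of round-robin rounds and tally wins."""
    wins: dict[str, int] = defaultdict(int)
    for _ in range(rounds):
        # Each round, every pair of agents competes once.
        for a, b in itertools.combinations(agents, 2):
            wins[run_match(a, b)] += 1
    return dict(wins)

if __name__ == "__main__":
    print(tournament(["model_a", "model_b", "model_c"]))
```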
