Mitigating Hallucinations in Large Language Models via Causal Reasoning
arXiv:2508.12495v2 Announce Type: replace-cross

Abstract: Large language models (LLMs) exhibit logically inconsistent hallucinations that appear coherent yet violate reasoning principles. Recent research suggests an inverse relationship between causal reasoning capability and such hallucinations. However, existing reasoning approaches in LLMs,…
