HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models
arXiv:2506.15065v2 Announce Type: replace

Abstract: Large language models (LLMs) are increasingly being adopted as the cognitive core of embodied agents. However, inherited hallucinations, which stem from failures to ground user instructions in the observed physical environment, can lead to navigation…
