Archives AI News

When Prompts Override Vision: Prompt-Induced Hallucinations in LVLMs

arXiv:2604.21911v1 Announce Type: cross Abstract: Despite impressive progress in the capabilities of large vision-language models (LVLMs), these systems remain vulnerable to hallucinations, i.e., outputs that are not grounded in the visual input. Prior work has attributed hallucinations in LVLMs to factors…

RIFT: Repurposing Negative Samples via Reward-Informed Fine-Tuning

arXiv:2601.09253v2 Announce Type: replace-cross Abstract: While Supervised Fine-Tuning (SFT) and Rejection Sampling Fine-Tuning (RFT) are standard for LLM alignment, they either rely on costly expert data or discard valuable negative samples, leading to data inefficiency. To address this, we propose…
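The abstract is truncated here, so the details of RIFT itself are not shown. Still, the core idea it motivates, keeping low-reward ("negative") samples in the training signal instead of discarding them as Rejection Sampling Fine-Tuning does, can be illustrated with a generic reward-weighted objective. The function below is a minimal sketch under that assumption; the name `reward_weighted_loss` and the advantage-baseline formulation are illustrative choices, not the paper's actual method.

```python
def reward_weighted_loss(nlls, rewards, baseline=None):
    """Weight each sample's negative log-likelihood (NLL) by a centered reward.

    Samples with reward above the baseline are reinforced (their NLL is
    minimized), while samples below it -- the negative samples that
    rejection sampling would simply discard -- contribute a suppression
    signal instead of being thrown away.

    nlls:    per-sample NLL of the model's output, one float per sample
    rewards: per-sample scalar reward from a reward model
    """
    if baseline is None:
        # Use the batch-mean reward as a simple baseline.
        baseline = sum(rewards) / len(rewards)
    advantages = [r - baseline for r in rewards]
    # Positive advantage * NLL pushes probability toward the sample;
    # negative advantage pushes probability away from it.
    return sum(a * nll for a, nll in zip(advantages, nlls)) / len(nlls)
```

For example, with NLLs `[2.0, 1.0]` and rewards `[1.0, 0.0]`, the mean baseline is 0.5, so the advantages are `[0.5, -0.5]` and the loss is (0.5·2.0 − 0.5·1.0)/2 = 0.25; minimizing it raises the likelihood of the high-reward sample while lowering that of the low-reward one.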