Archives AI News

Attention-space Contrastive Guidance for Efficient Hallucination Mitigation in LVLMs

arXiv:2601.13707v2 Announce Type: replace-cross Abstract: Hallucinations in large vision–language models (LVLMs) often arise when language priors dominate over visual evidence, leading to object misidentification and visually inconsistent descriptions. We address this problem by framing hallucination mitigation as contrastive guidance that…
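The abstract frames hallucination mitigation as contrastive guidance between visually grounded and language-prior-dominated predictions. As a hedged illustration only (the formula, `alpha`, and all variable names below are assumptions, not the paper's attention-space method), a common contrastive-guidance pattern amplifies the difference between image-conditioned and text-only next-token logits:

```python
import numpy as np

def contrastive_guidance(logits_with_image, logits_text_only, alpha=1.0):
    # Illustrative sketch: push the distribution away from the pure
    # language prior and toward what the visual evidence supports.
    # This is a generic contrastive-decoding form, NOT the paper's
    # attention-space formulation.
    return logits_with_image + alpha * (logits_with_image - logits_text_only)

# Toy example: token 2 is favored by the language prior alone,
# token 0 gains support once the image is conditioned on.
with_img  = np.array([1.8, 0.5, 2.0])
text_only = np.array([0.2, 0.5, 2.5])
guided = contrastive_guidance(with_img, text_only, alpha=1.0)
# Before guidance the prior-favored token 2 wins; after guidance,
# the visually supported token 0 does.
```

The design intuition is that tokens scoring high only because of the language prior get suppressed, while tokens whose score depends on the image are boosted.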

AQPIM: Breaking the PIM Capacity Wall for LLMs with In-Memory Activation Quantization

arXiv:2604.18137v1 Announce Type: cross Abstract: Processing-in-Memory (PIM) architectures offer a promising solution to the memory bottlenecks in data-intensive machine learning, yet often overlook the growing challenge of activation memory footprint. Conventional PIM approaches struggle with massive KV cache sizes generated…
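The abstract points at activation (e.g. KV-cache) memory as the bottleneck. As a hedged sketch of why quantizing activations shrinks that footprint (symmetric per-tensor int8 is an illustrative assumption; AQPIM's actual in-memory scheme is not described in the excerpt):

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor int8 quantization: 4x smaller than float32.
    # Illustrative only; not AQPIM's in-memory quantization scheme.
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Toy KV-cache slice: 4 heads x 64 dims of float32 activations.
kv = np.random.default_rng(0).normal(size=(4, 64)).astype(np.float32)
q, s = quantize_int8(kv)
err = np.max(np.abs(dequantize(q, s) - kv))
# kv.nbytes == 1024, q.nbytes == 256; rounding error stays below one
# quantization step `s`.
```

The 4x capacity gain is exactly the kind of lever a PIM architecture needs when the KV cache, not the weights, is what overflows memory.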

Non-Stationarity in the Embedding Space of Time Series Foundation Models

arXiv:2604.16428v1 Announce Type: new Abstract: Time series foundation models (TSFMs) are widely used as generic feature extractors, yet the notion of non-stationarity in their embedding spaces remains poorly understood. Recent work often conflates non-stationarity with distribution shift, blurring distinctions fundamental…
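The abstract distinguishes non-stationarity from distribution shift in embedding space. A toy diagnostic (my assumption for illustration, not the paper's analysis) makes the distinction concrete: track how the window-wise mean of an embedding sequence moves. A mean that drifts at every step suggests non-stationarity, while a single jump between two otherwise stable regimes looks more like a one-time distribution shift:

```python
import numpy as np

def window_mean_drift(embeddings, window):
    # Mean embedding per non-overlapping window, then the step size
    # between consecutive window means. Illustrative diagnostic only.
    n = len(embeddings) // window
    means = np.array([embeddings[i * window:(i + 1) * window].mean(axis=0)
                      for i in range(n)])
    return np.linalg.norm(np.diff(means, axis=0), axis=1)

rng = np.random.default_rng(0)
t = np.arange(200)[:, None]
drifting = rng.normal(size=(200, 8)) + 0.05 * t  # mean moves every step
shifted = rng.normal(size=(200, 8))
shifted[100:] += 5.0                             # one abrupt regime jump

d1 = window_mean_drift(drifting, 50)  # every step large: non-stationary
d2 = window_mean_drift(shifted, 50)   # one large step: distribution shift
```

Conflating the two cases, as the abstract notes recent work does, would treat `d1` and `d2` as the same phenomenon even though their temporal structure differs.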