Archives AI News

EEG-Bench: A Benchmark for EEG Foundation Models in Clinical Applications

arXiv:2512.08959v1 Announce Type: new Abstract: We introduce a unified benchmarking framework focused on evaluating EEG-based foundation models in clinical applications. The benchmark spans 11 well-defined diagnostic tasks across 14 publicly available EEG datasets, including epilepsy, schizophrenia, Parkinson’s disease, OCD, and…

LLM4XCE: Large Language Models for Extremely Large-Scale Massive MIMO Channel Estimation

arXiv:2512.08955v1 Announce Type: new Abstract: Extremely large-scale massive multiple-input multiple-output (XL-MIMO) is a key enabler for sixth-generation (6G) networks, offering massive spatial degrees of freedom. Despite these advantages, the coexistence of near-field and far-field effects in hybrid-field channels presents significant…

Entropy-Informed Weighting Channel Normalizing Flow for Deep Generative Models

arXiv:2407.04958v2 Announce Type: replace Abstract: Normalizing Flows (NFs) are widely used in deep generative models for their exact likelihood estimation and efficient sampling. However, they require substantial memory since the latent space matches the input dimension. Multi-scale architectures address this…
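The memory point in the abstract — that a normalizing flow's latent space has the same dimension as its input — follows from the flow being a bijection. A minimal sketch of one affine coupling step (a standard NF building block, not this paper's method; the `shift`/`log_scale` values are fixed here for illustration, where a real flow would compute them with a neural network conditioned on the first half of the input):

```python
import numpy as np

def affine_coupling_forward(x, shift, log_scale):
    """One affine coupling step: rescale and shift the second half of x,
    leaving the first half unchanged. Because the map is bijective, the
    output z has exactly the same dimension as x."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    z2 = x2 * np.exp(log_scale) + shift
    z = np.concatenate([x1, z2], axis=-1)
    log_det = np.sum(log_scale)  # exact log-determinant of the Jacobian
    return z, log_det

x = np.array([1.0, 2.0, 3.0, 4.0])
z, log_det = affine_coupling_forward(
    x, shift=np.array([0.5, -0.5]), log_scale=np.array([0.1, 0.2])
)
assert z.shape == x.shape  # latent dimension equals input dimension
```

Multi-scale architectures reduce this cost by "splitting off" part of the latent vector at intermediate layers, so later layers operate on smaller tensors.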

Financial Instruction Following Evaluation (FIFE)

arXiv:2512.08965v1 Announce Type: new Abstract: Language Models (LMs) struggle with complex, interdependent instructions, particularly in high-stakes domains like finance where precision is critical. We introduce FIFE, a novel, high-difficulty benchmark designed to assess LM instruction-following capabilities for financial analysis tasks.…

CluCERT: Certifying LLM Robustness via Clustering-Guided Denoising Smoothing

arXiv:2512.08967v1 Announce Type: new Abstract: Recent advancements in Large Language Models (LLMs) have led to their widespread adoption in daily applications. Despite their impressive capabilities, they remain vulnerable to adversarial attacks, as even minor meaning-preserving changes such as synonym substitutions…
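The general idea behind smoothing-based certification — classify many randomly perturbed copies of the input and take a majority vote — can be sketched in miniature. This is an illustrative toy, not CluCERT itself: the synonym table and the keyword "classifier" below are hypothetical stand-ins for a real substitution set and a real LLM.

```python
import random
from collections import Counter

# Hypothetical synonym table (a real system would use a large substitution set).
SYNONYMS = {"good": ["great", "fine"], "bad": ["poor", "awful"]}

def toy_classifier(text):
    """Stand-in for an LLM: labels text by a sentiment keyword."""
    positives = {"good", "great", "fine"}
    return "pos" if any(w in positives for w in text.split()) else "neg"

def randomized_synonym_perturb(text, rng, p=0.5):
    """Randomly replace words with synonyms, mimicking the perturbations
    a smoothed classifier is averaged over."""
    words = [rng.choice(SYNONYMS[w]) if w in SYNONYMS and rng.random() < p else w
             for w in text.split()]
    return " ".join(words)

def smoothed_classify(text, n_samples=100, seed=0):
    """Majority vote of the base classifier over perturbed copies of the input."""
    rng = random.Random(seed)
    votes = Counter(
        toy_classifier(randomized_synonym_perturb(text, rng))
        for _ in range(n_samples)
    )
    return votes.most_common(1)[0][0]

print(smoothed_classify("the movie was good"))  # prints "pos"
```

A certificate then bounds how far the vote margin can shift under any allowed substitution attack; the clustering in CluCERT guides how perturbations are generated, which this sketch does not model.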

The Impossibility of Inverse Permutation Learning in Transformer Models

arXiv:2509.24125v3 Announce Type: replace Abstract: In this technical note, we study the problem of inverse permutation learning in decoder-only transformers. Given a permutation and a string to which that permutation has been applied, the model is tasked with producing the…
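The task setup is easy to state concretely: apply a permutation to a string, then ask for the original string back, which amounts to applying the inverse permutation. A short illustration (plain Python, just to make the task definition precise; it says nothing about transformer expressivity):

```python
def apply_permutation(s, perm):
    """Place character s[i] at position perm[i]; perm is a bijection on indices."""
    out = [None] * len(s)
    for i, p in enumerate(perm):
        out[p] = s[i]
    return "".join(out)

def invert_permutation(perm):
    """Return the inverse bijection: if perm sends i to p, inv sends p to i."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

perm = [2, 0, 3, 1]
scrambled = apply_permutation("abcd", perm)                      # "bdac"
recovered = apply_permutation(scrambled, invert_permutation(perm))  # "abcd"
```

The paper's claim concerns whether a decoder-only transformer, given `perm` and `scrambled` in-context, can compute `recovered` — not whether the inverse itself is hard to compute.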