Archives AI News

EqDeepRx: Learning a Scalable MIMO Receiver

arXiv:2602.11834v1 Announce Type: cross Abstract: While machine learning (ML)-based receiver algorithms have received a great deal of attention in the recent literature, they often scale poorly with increasing spatial multiplexing order and lack explainability and generalization. This…

Echo: Towards Advanced Audio Comprehension via Audio-Interleaved Reasoning

arXiv:2602.11909v1 Announce Type: cross Abstract: The maturation of Large Audio Language Models (LALMs) has raised growing expectations for them to comprehend complex audio much like humans. Current efforts primarily replicate text-based reasoning by contextualizing audio content through a one-time encoding,…

Divide and Learn: Multi-Objective Combinatorial Optimization at Scale

arXiv:2602.11346v1 Announce Type: new Abstract: Multi-objective combinatorial optimization seeks Pareto-optimal solutions over exponentially large discrete spaces, yet existing methods sacrifice generality, scalability, or theoretical guarantees. We reformulate it as an online learning problem over a decomposed decision space, solving position-wise…

Iskra: A System for Inverse Geometry Processing

arXiv:2602.12105v1 Announce Type: cross Abstract: We propose a system for differentiating through solutions to geometry processing problems. Our system differentiates a broad class of geometric algorithms, exploiting existing fast problem-specific schemes common to geometry processing, including local-global and ADMM solvers.…
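The abstract is truncated here, but differentiating through a solver's solution typically rests on the implicit function theorem rather than unrolling solver iterations. The sketch below is an assumption about that general technique, not Iskra's actual implementation: for a linear solve x = A⁻¹b, the backward pass needs only one extra transposed solve.

```python
import numpy as np

def solve(A, b):
    """Forward pass: x solving A x = b."""
    return np.linalg.solve(A, b)

def solve_vjp(A, b, x, gx):
    """Backward pass via the implicit function theorem.

    Differentiating A x = b gives A dx = db - dA x, so with
    lam = A^{-T} gx the vector-Jacobian products are
    grad_b = lam and grad_A = -lam x^T.
    """
    lam = np.linalg.solve(A.T, gx)
    return -np.outer(lam, x), lam

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])
x = solve(A, b)
gx = np.array([1.0, 0.5])          # upstream gradient d(loss)/dx
grad_A, grad_b = solve_vjp(A, b, x, gx)

# finite-difference check on d(loss)/db[0]
eps = 1e-6
e0 = np.array([eps, 0.0])
fd = np.dot(gx, (solve(A, b + e0) - x) / eps)
assert abs(fd - grad_b[0]) < 1e-4
```

The same pattern (differentiate the optimality condition, solve a transposed system) extends to the fixed-point and ADMM-style solvers the abstract mentions, at the cost of one adjoint solve per backward pass.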

AttentionRetriever: Attention Layers are Secretly Long Document Retrievers

arXiv:2602.12278v1 Announce Type: cross Abstract: Retrieval augmented generation (RAG) has been widely adopted to help Large Language Models (LLMs) process tasks involving long documents. However, existing retrieval models are not designed for long-document retrieval and fail to address…
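The abstract cuts off, but the premise in the title — attention layers acting as retrievers — can be illustrated by reusing scaled dot-product attention weights as relevance scores over document chunks. The toy orthogonal chunk embeddings below are an invented illustration, not the paper's method:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_scores(query_vec, chunk_vecs):
    """Scaled dot-product attention weights of a query over document chunks."""
    d = query_vec.shape[0]
    return softmax(chunk_vecs @ query_vec / np.sqrt(d))

def retrieve(query_vec, chunk_vecs, top_k=1):
    """Rank chunks by the attention weight the query assigns to each."""
    w = attention_scores(query_vec, chunk_vecs)
    return np.argsort(-w)[:top_k], w

# toy orthogonal "chunk embeddings"; the query mostly overlaps with chunk 1
chunk_vecs = np.eye(4)
query = np.array([0.1, 0.9, 0.2, 0.0])
top, weights = retrieve(query, chunk_vecs)
assert top[0] == 1
```

The appeal of this framing is that the scores come from a component the LLM already computes, so no separate retriever embedding model is required.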

Binary Autoencoder for Mechanistic Interpretability of Large Language Models

arXiv:2509.20997v2 Announce Type: replace Abstract: Existing work aims to untangle atomized numerical components (features) from the hidden states of Large Language Models (LLMs). However, it typically relies on autoencoders constrained by training-time regularization applied to individual training instances, without…
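To make the premise concrete, here is a hedged toy sketch of a binary autoencoder recovering which atomic feature directions are present in a hidden state. The hand-built weights and "feature" directions are invented for illustration; the paper's architecture and training procedure are not reproduced here:

```python
import numpy as np

# two hypothetical "atomic feature" directions in a 4-dim hidden state
f1 = np.array([1.0, 0.0, 1.0, 0.0])
f2 = np.array([0.0, 1.0, 0.0, -1.0])

W_enc = np.stack([f1, f2], axis=1)  # encoder projects onto the features
W_dec = np.stack([f1, f2], axis=0)  # decoder rebuilds from the 0/1 code

def encode(h, thresh=0.5):
    """Hard-binarize feature activations into a 0/1 code."""
    return (h @ W_enc > thresh).astype(float)

def decode(code):
    return code @ W_dec

h = f1 + f2                  # hidden state containing both features
code = encode(h)             # which features are "on"
recon = decode(code)
assert code.tolist() == [1.0, 1.0]
assert np.allclose(recon, h)
```

The binary code makes the interpretability claim direct: each bit asserts the presence or absence of one feature, rather than a graded activation whose meaning must be inferred.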

Scale-Invariant Fast Convergence in Games

arXiv:2602.11857v1 Announce Type: cross Abstract: Scale-invariance in games has recently emerged as a widely valued property. Yet almost all fast-convergence guarantees for learning in games require prior knowledge of the utility scale. To address this, we develop learning…