Archives AI News

Achieving Logarithmic Regret in KL-Regularized Zero-Sum Markov Games

arXiv:2510.13060v1 Announce Type: new Abstract: Reverse Kullback-Leibler (KL) divergence-based regularization with respect to a fixed reference policy is widely used in modern reinforcement learning to preserve the desired traits of the reference policy and sometimes to promote exploration (using uniform…
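The reverse-KL penalty toward a fixed reference policy that the abstract describes can be sketched for discrete action distributions as follows (a minimal illustration; `regularized_value` and the uniform-reference example are assumptions, not the paper's setup):

```python
import numpy as np

def reverse_kl(policy, reference):
    """Reverse KL divergence KL(policy || reference) between two
    discrete action distributions (arrays summing to 1)."""
    policy = np.asarray(policy, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sum(policy * np.log(policy / reference)))

def regularized_value(reward, policy, reference, beta):
    """Expected reward minus a reverse-KL penalty toward the reference
    policy, weighted by beta (hypothetical helper for illustration)."""
    return float(policy @ reward) - beta * reverse_kl(policy, reference)

# A uniform reference promotes exploration: the penalty is zero when
# the policy matches the reference and grows as it concentrates.
uniform = np.array([0.25, 0.25, 0.25, 0.25])
greedy = np.array([0.97, 0.01, 0.01, 0.01])
```

With a uniform reference, maximizing `regularized_value` trades off reward against concentration, which is the regularization effect the abstract mentions.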

Do LLM Agents Have Regret? A Case Study in Online Learning and Games

arXiv:2403.16843v5 Announce Type: replace Abstract: Large language models (LLMs) have been increasingly employed for (interactive) decision-making, via the development of LLM-based autonomous agents. Despite their emerging successes, the performance of LLM agents in decision-making has not been fully investigated through…
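The notion of regret studied here has a standard definition in online learning: the loss a decision-maker incurs minus the loss of the best fixed action in hindsight. A minimal sketch of that definition (illustrative only, not the paper's experimental protocol):

```python
import numpy as np

def external_regret(losses, actions):
    """External regret of a sequence of chosen actions against the best
    fixed action in hindsight. losses: (T, K) array of per-round losses
    for K actions; actions: length-T list of chosen action indices."""
    losses = np.asarray(losses, dtype=float)
    incurred = losses[np.arange(len(actions)), actions].sum()
    best_fixed = losses.sum(axis=0).min()
    return float(incurred - best_fixed)

losses = np.array([[1.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0]])
# Playing action 0 every round incurs total loss 2.0; the best fixed
# action in hindsight (action 1) incurs 1.0, so regret is 1.0.
```

A no-regret learner is one whose average regret vanishes as T grows, which is the benchmark against which LLM agents are evaluated.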

NeuroRVQ: Multi-Scale EEG Tokenization for Generative Large Brainwave Models

arXiv:2510.13068v1 Announce Type: new Abstract: Electroencephalography (EEG) captures neural activity across multiple temporal and spectral scales, yielding signals that are rich but complex for representation learning. Recently, EEG foundation models trained to predict masked signal tokens have shown promise for learning…
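The "RVQ" in the title refers to residual vector quantization, a standard way to turn continuous signals into discrete tokens: each stage quantizes the residual left by the previous stage. A minimal sketch of the general scheme (the tiny codebooks are hypothetical; this is not the NeuroRVQ architecture):

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: each stage picks the nearest code
    for the current residual and subtracts it, yielding one token per
    stage and a final residual error."""
    residual = np.asarray(x, dtype=float)
    tokens = []
    for codebook in codebooks:            # codebook: (num_codes, dim)
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        tokens.append(idx)
        residual = residual - codebook[idx]
    return tokens, residual

# Two tiny codebooks: a coarse stage, then a fine correction stage.
coarse = np.array([[0.0, 0.0], [1.0, 1.0]])
fine = np.array([[0.0, 0.0], [0.1, -0.1]])
tokens, resid = rvq_encode([1.1, 0.9], [coarse, fine])
```

Stacking stages at different resolutions gives the multi-scale discrete tokens that a masked-token brainwave model can be trained to predict.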

Random Scaling for Emergent Capabilities

arXiv:2502.17356v4 Announce Type: replace Abstract: Language models famously improve under a smooth scaling law, but some specific capabilities exhibit sudden breakthroughs in performance. While advocates of “emergence” view breakthroughs as unlocked capabilities, others attribute them to thresholding effects on noncontinuous…
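The thresholding effect the abstract alludes to is easy to see with a toy calculation: if per-token accuracy improves smoothly with scale, exact-match accuracy on a length-L answer scales as accuracy**L, so the harder metric stays near zero and then jumps. A minimal sketch (illustrative, not the paper's model):

```python
import numpy as np

def exact_match_rate(token_accuracy, seq_len):
    """Probability of getting every token of a length-seq_len answer
    right, assuming independent per-token accuracy. A smooth input
    produces a sharp-looking curve on this harder metric."""
    return token_accuracy ** seq_len

# Per-token accuracy improving smoothly across model scales...
scales = np.array([0.5, 0.7, 0.9, 0.99])
# ...yields an apparently sudden breakthrough in exact match:
# 0.9**20 is about 0.12, while 0.99**20 is about 0.82.
emergent = exact_match_rate(scales, 20)
```

The same smooth underlying capability thus looks "emergent" or not depending on whether the metric applies an all-or-nothing threshold.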

Transformer-based Scalable Beamforming Optimization via Deep Residual Learning

arXiv:2510.13077v1 Announce Type: new Abstract: We develop an unsupervised deep learning framework for downlink beamforming in large-scale MU-MISO channels. The model is trained offline, allowing real-time inference through lightweight feedforward computations in dynamic communication environments. Following the learning-to-optimize (L2O) paradigm,…
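For context on the optimization target, a classical baseline the learned beamformer would be compared against is zero-forcing: invert the channel to suppress inter-user interference, then scale to a power budget. A minimal sketch of that baseline (not the paper's network; the power normalization is one common convention):

```python
import numpy as np

def zero_forcing_beamformer(H, total_power):
    """Zero-forcing downlink beamformer for a MU-MISO channel
    H (users x antennas): pseudo-inverse directions scaled to satisfy
    a total transmit-power constraint."""
    W = np.linalg.pinv(H)                          # antennas x users
    W = W * np.sqrt(total_power / np.sum(np.abs(W) ** 2))
    return W

# Two users, four antennas: with full row rank, H @ W is a scaled
# identity, i.e. each user sees only its own stream.
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
W = zero_forcing_beamformer(H, total_power=1.0)
```

An L2O model replaces this closed-form map with a learned network that is trained offline and evaluated by cheap feedforward passes online.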