Archives AI News

PIBNet: a Physics-Inspired Boundary Network for Multiple Scattering Simulations

arXiv:2512.02049v1 Announce Type: new Abstract: The boundary element method (BEM) provides an efficient numerical framework for solving multiple scattering problems in unbounded homogeneous domains, since it reduces the discretization to the domain boundaries, thereby condensing the computational complexity. The procedure…
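
As general BEM background (standard theory, not PIBNet specifics): the reduction to the boundary rests on Green's representation formula, which expresses the exterior field purely from boundary data, so only the boundary $\Gamma$ needs discretizing:

```latex
% Green's representation formula for an exterior Helmholtz field u,
% with free-space Green's function G and outward normal n_y:
u(x) = \int_{\Gamma} \left[ G(x,y)\,\frac{\partial u}{\partial n}(y)
       - \frac{\partial G(x,y)}{\partial n_y}\, u(y) \right] \mathrm{d}s(y),
\qquad x \notin \Gamma .
```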

Cross-View Topology-Aware Graph Representation Learning

arXiv:2512.02130v1 Announce Type: new Abstract: Graph classification has gained significant attention due to its applications in chemistry, social networks, and bioinformatics. While Graph Neural Networks (GNNs) effectively capture local structural patterns, they often overlook global topological features that are critical…
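
To make concrete what "capturing local structural patterns" means here, a minimal sketch of one mean-aggregation message-passing step (generic GNN background, not this paper's method; all names are illustrative):

```python
def gnn_layer(adj, feats):
    """One mean-aggregation message-passing step: each node
    averages its own and its neighbors' feature vectors, so the
    output encodes local (1-hop) structure only."""
    n, d = len(feats), len(feats[0])
    out = []
    for i in range(n):
        # Self-loop plus 1-hop neighborhood
        nbrs = [i] + [j for j in range(n) if adj[i][j]]
        out.append([sum(feats[j][k] for j in nbrs) / len(nbrs)
                    for k in range(d)])
    return out

# Toy 3-node path graph 0 - 1 - 2 with 2-d features
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
h = gnn_layer(adj, feats)  # node 0 -> [0.5, 0.5]
```

Stacking k such layers only mixes information within k hops, which is exactly why global topological features can be missed.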

Efficient Turing Machine Simulation with Transformers

arXiv:2512.00003v2 Announce Type: replace-cross Abstract: Constant bit-size Transformers are known to be Turing complete, but existing constructions require $\Omega(s(n))$ chain-of-thought (CoT) steps per simulated Turing machine (TM) step, leading to impractical reasoning lengths. In this paper, we significantly reduce this…
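
The result is about how many CoT tokens a Transformer spends per simulated TM step; as plain background on what one such step is (textbook TM semantics, nothing from the paper), a single transition looks like:

```python
def tm_step(state, tape, head, delta):
    """One Turing machine transition: read the symbol under the
    head, look up (next state, symbol to write, move) in the
    transition table delta, write, and move the head."""
    sym = tape.get(head, "_")  # "_" denotes a blank cell
    next_state, write, move = delta[(state, sym)]
    tape[head] = write
    return next_state, tape, head + (1 if move == "R" else -1)

# Toy machine: in q0 on blank, write "1", move right, go to q1
delta = {("q0", "_"): ("q1", "1", "R")}
state, tape, head = tm_step("q0", {}, 0, delta)  # -> "q1", {0: "1"}, 1
```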

LSHBloom: Memory-efficient, Extreme-scale Document Deduplication

arXiv:2411.04257v3 Announce Type: replace Abstract: Contemporary large language model (LLM) training pipelines require the assembly of internet-scale databases full of text data from a variety of sources (e.g., web, academic, and publishers). Preprocessing these datasets via deduplication — detecting and…
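
The title suggests combining locality-sensitive hashing (MinHash) with a Bloom filter; a minimal sketch of that combination under generic assumptions (not the paper's implementation; parameters are illustrative):

```python
import hashlib

def minhash_signature(tokens, num_hashes=16):
    """MinHash signature: for each seeded hash function, keep the
    minimum hash over the document's token set. Near-identical
    token sets yield near-identical signatures."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16)
            for t in tokens)
        for seed in range(num_hashes))

class BloomFilter:
    """Fixed-size bit array: membership tests may give rare false
    positives but never false negatives, at a fraction of the
    memory an exact set would need."""
    def __init__(self, size=1024, k=4):
        self.bits = [False] * size
        self.size, self.k = size, k
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size
    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True
    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

# Dedup loop: store each document's signature in the Bloom filter
seen = BloomFilter()
doc_a = {"the", "quick", "brown", "fox"}
doc_b = {"the", "quick", "brown", "fox"}  # exact duplicate
seen.add(minhash_signature(doc_a))
is_dup = minhash_signature(doc_b) in seen  # True
```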

CLEF: Clinically-Guided Contrastive Learning for Electrocardiogram Foundation Models

arXiv:2512.02180v1 Announce Type: new Abstract: The electrocardiogram (ECG) is a key diagnostic tool in cardiovascular health. Single-lead ECG recording is integrated into both clinical-grade and consumer wearables. While self-supervised pretraining of foundation models on unlabeled ECGs improves diagnostic performance, existing…
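
Contrastive pretraining of this kind typically optimizes an InfoNCE-style objective; a minimal sketch for one anchor (generic contrastive-learning background, not CLEF's loss; how positives are chosen, e.g. via clinical labels, is the paper's contribution and is assumed away here):

```python
import math

def info_nce(sim_row, pos_index, temperature=0.1):
    """InfoNCE loss for one anchor: cross-entropy of the softmax
    over temperature-scaled similarities, where sim_row[pos_index]
    is the anchor's positive pair and the rest are negatives."""
    logits = [s / temperature for s in sim_row]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[pos_index] / sum(exps))

# Loss shrinks as the positive's similarity dominates the negatives
loss = info_nce([1.0, 0.0], pos_index=0, temperature=1.0)  # ~0.3133
```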

AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping

arXiv:2505.18738v2 Announce Type: replace Abstract: Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning (PEFT) method validated across NLP and CV domains. However, LoRA faces an inherent low-rank bottleneck: narrowing its performance gap with full finetuning requires increasing the rank…
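
For reference, the standard (linear) LoRA forward pass whose rank bottleneck the paper targets, as a dependency-free sketch (textbook LoRA, not AuroRA's nonlinear variant):

```python
def matmul(X, Y):
    """Plain list-of-lists matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*Y)] for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    """y = x W + (alpha / r) * x A B, with the pretrained weight W
    frozen and only the low-rank factors A (d_in x r) and
    B (r x d_out) trained; r << d keeps the update cheap."""
    r = len(A[0])
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    scale = alpha / r
    return [[b + scale * d for b, d in zip(br, dr)]
            for br, dr in zip(base, delta)]

# Toy example: 2x2 frozen weight with a rank-1 adapter
x = [[1.0, 0.0], [0.0, 1.0]]
W = [[1.0, 2.0], [3.0, 4.0]]
A = [[1.0], [0.0]]   # d_in x r, r = 1
B = [[0.0, 1.0]]     # r x d_out
y = lora_forward(x, W, A, B)  # [[1.0, 3.0], [3.0, 4.0]]
```

Because the update B A is linear with rank at most r, its expressivity is capped by r; that cap is the "low-rank bottleneck" the abstract refers to.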

Enforcing Orderedness to Improve Feature Consistency

arXiv:2512.02194v1 Announce Type: new Abstract: Sparse autoencoders (SAEs) have been widely used for interpretability of neural networks, but their learned features often vary across seeds and hyperparameter settings. We introduce Ordered Sparse Autoencoders (OSAE), which extend Matryoshka SAEs by (1)…
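
To illustrate the Matryoshka-style idea OSAE builds on, a sketch of prefix reconstruction, where the input must be reconstructable from each prefix of an ordered feature dictionary so earlier features carry the most information (generic sketch under that assumption, not the paper's training procedure):

```python
def prefix_reconstructions(codes, decoder, prefix_sizes):
    """For each prefix size k, reconstruct the input from only the
    first k (ordered) features: x_hat_i = sum_{j<k} codes[j] *
    decoder[j][i]. Training against all prefixes pressures early
    features to be the most informative, imposing an ordering."""
    d = len(decoder[0])
    recons = []
    for k in prefix_sizes:
        x_hat = [sum(codes[j] * decoder[j][i] for j in range(k))
                 for i in range(d)]
        recons.append(x_hat)
    return recons

# Two ordered features with axis-aligned decoder directions
recons = prefix_reconstructions(
    codes=[1.0, 2.0],
    decoder=[[1.0, 0.0], [0.0, 1.0]],
    prefix_sizes=[1, 2])  # [[1.0, 0.0], [1.0, 2.0]]
```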