Archives AI News

SEQR: Secure and Efficient QR-based LoRA Routing

arXiv:2509.18093v1 Announce Type: cross Abstract: Low-Rank Adaptation (LoRA) has become a standard technique for parameter-efficient fine-tuning of large language models, enabling large libraries of LoRAs, each for a specific task or domain. Efficiently selecting the correct LoRA adapter for a…
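The LoRA mechanism this abstract builds on replaces a full weight update with a trainable low-rank product added to a frozen base weight. A minimal sketch of that forward pass (dimensions, names, and the `alpha/r` scaling convention are illustrative, not taken from the paper):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Frozen base weight W plus a trainable low-rank update B @ A (rank r)."""
    scale = alpha / r
    return x @ W.T + (x @ A.T) @ B.T * scale

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 4
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized
x = rng.normal(size=(2, d_in))

# With B zero-initialized the adapter is inert, so the adapted layer
# starts out exactly equal to the frozen pretrained layer.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only `A` and `B` (rank-`r` factors) are trained, each task adapter is tiny, which is what makes the large per-task adapter libraries the abstract mentions practical.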

Improving Deep Tabular Learning

arXiv:2509.16354v1 Announce Type: new Abstract: Tabular data remain a dominant form of real-world information but pose persistent challenges for deep learning due to heterogeneous feature types, lack of natural structure, and limited label-preserving augmentations. As a result, ensemble models based…

Guided Sequence-Structure Generative Modeling for Iterative Antibody Optimization

arXiv:2509.16357v1 Announce Type: new Abstract: Therapeutic antibody candidates often require extensive engineering to improve key functional and developability properties before clinical development. This can be achieved through iterative design, where starting molecules are optimized over several rounds of in vitro…

A geometric framework for momentum-based optimizers for low-rank training

arXiv:2506.17475v2 Announce Type: replace Abstract: Low-rank pre-training and fine-tuning have recently emerged as promising techniques for reducing the computational and storage costs of large neural networks. Training low-rank parameterizations typically relies on conventional optimizers such as heavy ball momentum methods…
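The "heavy ball momentum" the abstract refers to is the classical update v ← βv − η∇f, θ ← θ + v, applied here to the factors of a low-rank parameterization. A toy sketch (the quadratic objective and all hyperparameters are illustrative, not the paper's setup):

```python
import numpy as np

def heavy_ball_step(param, grad, velocity, lr=0.02, beta=0.9):
    """Heavy-ball momentum: v <- beta*v - lr*grad; param <- param + v."""
    velocity = beta * velocity - lr * grad
    return param + velocity, velocity

# Toy low-rank training: fit W = U @ V.T (rank r) to a rank-r target M.
rng = np.random.default_rng(1)
d, n, r = 10, 8, 2
M = rng.normal(size=(d, r)) @ rng.normal(size=(r, n))   # rank-r target
U = rng.normal(size=(d, r)) * 0.1                        # low-rank factors
V = rng.normal(size=(n, r)) * 0.1
vU, vV = np.zeros_like(U), np.zeros_like(V)

for _ in range(2000):
    R = U @ V.T - M               # residual of 0.5 * ||U V^T - M||_F^2
    gU, gV = R @ V, R.T @ U       # gradients w.r.t. U and V
    U, vU = heavy_ball_step(U, gU, vU)
    V, vV = heavy_ball_step(V, gV, vV)

assert np.linalg.norm(U @ V.T - M) < 1e-2 * np.linalg.norm(M)
```

Note that the momentum buffers live on the factors `U` and `V` rather than on the full matrix `U @ V.T`; reconciling that factored optimizer state with the geometry of the low-rank manifold is the kind of question the paper's framework addresses.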

EMPEROR: Efficient Moment-Preserving Representation of Distributions

arXiv:2509.16379v1 Announce Type: new Abstract: We introduce EMPEROR (Efficient Moment-Preserving Representation of Distributions), a mathematically rigorous and computationally efficient framework for representing high-dimensional probability measures arising in neural network representations. Unlike heuristic global pooling operations, EMPEROR encodes a feature distribution…
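To make the contrast with "heuristic global pooling" concrete: instead of collapsing a set of feature vectors to a single mean or max, one can summarize the feature *distribution* by several of its moments. The toy pooling below illustrates that idea only; it is not EMPEROR's actual construction:

```python
import numpy as np

def moment_pool(feats, num_moments=4):
    """Summarize a set of feature vectors by per-dimension moments
    (mean, variance, then standardized higher moments) rather than
    a single mean- or max-pooled vector."""
    mu = feats.mean(axis=0)
    centered = feats - mu
    var = centered.var(axis=0)
    std = np.sqrt(var + 1e-8)
    out = [mu, var]
    for k in range(3, num_moments + 1):
        out.append(((centered / std) ** k).mean(axis=0))  # k-th standardized moment
    return np.concatenate(out)

rng = np.random.default_rng(0)
feats = rng.normal(size=(128, 16))   # e.g. 128 token features of dimension 16
pooled = moment_pool(feats)
assert pooled.shape == (4 * 16,)     # num_moments summaries per feature dimension
```

Mean pooling alone would map many different distributions to the same vector; keeping higher moments preserves shape information (spread, asymmetry, tails) about the underlying probability measure.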

CoUn: Empowering Machine Unlearning via Contrastive Learning

arXiv:2509.16391v1 Announce Type: new Abstract: Machine unlearning (MU) aims to remove the influence of specific “forget” data from a trained model while preserving its knowledge of the remaining “retain” data. Existing MU methods based on label manipulation or model weight…

Federated Learning for Financial Forecasting

arXiv:2509.16393v1 Announce Type: new Abstract: This paper studies Federated Learning (FL) for binary classification of volatile financial market trends. Using a shared Long Short-Term Memory (LSTM) classifier, we compare three scenarios: (i) a centralized model trained on the union of…
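In the federated scenario, clients train the shared LSTM classifier locally and a server merges their parameters, most commonly by data-size-weighted federated averaging (FedAvg). A minimal sketch of that aggregation step, with hypothetical parameter names (the paper's exact aggregation scheme is not specified in this excerpt):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine clients' parameter dicts,
    weighting each client by the number of samples it trained on."""
    total = sum(client_sizes)
    return {
        key: sum(w[key] * (n / total)
                 for w, n in zip(client_weights, client_sizes))
        for key in client_weights[0]
    }

# Two toy clients sharing one (hypothetical) LSTM parameter tensor.
client_a = {"lstm.W_ih": np.ones((2, 2))}    # trained on 300 samples
client_b = {"lstm.W_ih": np.zeros((2, 2))}   # trained on 100 samples
merged = fedavg([client_a, client_b], client_sizes=[300, 100])

assert np.allclose(merged["lstm.W_ih"], 0.75)   # 300/400 * 1 + 100/400 * 0
```

Only parameters leave each client, never raw market data, which is the privacy property that motivates comparing FL against the centralized baseline described above.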