Archives AI News

The Effect of Enforcing Fairness on Reshaping Explanations in Machine Learning Models

arXiv:2512.02265v1 Announce Type: new Abstract: Trustworthy machine learning in healthcare requires strong predictive performance, fairness, and explainability. While it is known that improving fairness can affect predictive performance, little is known about how fairness improvements influence explainability, an essential ingredient…

XXLTraffic: Expanding and Extremely Long Traffic forecasting beyond test adaptation

arXiv:2406.12693v3 Announce Type: replace Abstract: Traffic forecasting is crucial for smart cities and intelligent transportation initiatives, where deep learning has made significant progress in modeling complex spatio-temporal patterns in recent years. However, current public datasets have limitations in reflecting the…

Limitations of Membership Queries in Testable Learning

arXiv:2512.02279v1 Announce Type: new Abstract: Membership queries (MQ) often yield speedups for learning tasks, particularly in the distribution-specific setting. We show that in the testable learning model of Rubinfeld and Vasilyan [RV23], membership queries cannot decrease the time complexity of…

Machine Unlearning via Information Theoretic Regularization

arXiv:2502.05684v3 Announce Type: replace Abstract: How can we effectively remove or "unlearn" undesirable information, such as specific features or the influence of individual data points, from a learning outcome while minimizing utility loss and ensuring rigorous guarantees? We introduce a…

Training Dynamics of Learning 3D-Rotational Equivariance

arXiv:2512.02303v1 Announce Type: new Abstract: While data augmentation is widely used to train symmetry-agnostic models, it remains unclear how quickly and effectively they learn to respect symmetries. We investigate this by deriving a principled measure of equivariance error that, for…
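The abstract's notion of an equivariance error can be illustrated with a minimal sketch. This is not the paper's measure; the names (`rot_z`, `equivariance_error`) and the choice of testing a linear map under z-axis rotations are illustrative assumptions only:

```python
import numpy as np

def rot_z(theta):
    """3D rotation matrix about the z-axis (illustrative symmetry group element)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def equivariance_error(f, xs, thetas):
    """Mean ||f(R x) - R f(x)|| over sampled inputs and rotation angles.
    Zero iff f commutes with every sampled rotation."""
    errs = []
    for x in xs:
        for t in thetas:
            R = rot_z(t)
            errs.append(np.linalg.norm(f(R @ x) - R @ f(x)))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
xs = rng.normal(size=(8, 3))
thetas = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)

# An isotropic map (scalar multiple of identity) commutes with all rotations.
iso = lambda x: 2.0 * x
# A generic linear map does not.
W = rng.normal(size=(3, 3))
generic = lambda x: W @ x

iso_err = equivariance_error(iso, xs, thetas)        # essentially zero
generic_err = equivariance_error(generic, xs, thetas)  # strictly positive
```

During training with rotation augmentation, tracking such an error over checkpoints would show how quickly a symmetry-agnostic model approaches equivariance.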

Soft-Label Caching and Sharpening for Communication-Efficient Federated Distillation

arXiv:2504.19602v3 Announce Type: replace Abstract: Federated Learning (FL) enables collaborative model training across decentralized clients, enhancing privacy by keeping data local. Yet conventional FL, relying on frequent parameter-sharing, suffers from high communication overhead and limited model heterogeneity. Distillation-based FL approaches…
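The two ideas named in the title, caching soft labels and sharpening them, can be sketched as follows. The functions, the temperature value, and the change-threshold caching policy are assumptions for illustration, not the paper's protocol:

```python
import numpy as np

def sharpen(probs, T=0.5):
    """Temperature-sharpen a soft-label distribution; T < 1 makes it peakier."""
    p = np.asarray(probs, dtype=float) ** (1.0 / T)
    return p / p.sum()

cache = {}  # sample_id -> last communicated soft label

def maybe_update(sample_id, new_label, tol=0.05):
    """Communicate a soft label only if it moved more than `tol` in L1
    distance since the cached copy, skipping redundant transmissions."""
    old = cache.get(sample_id)
    if old is None or np.abs(new_label - old).sum() > tol:
        cache[sample_id] = new_label
        return True   # sent over the network
    return False      # served from cache, saving bandwidth

# Sharpening concentrates mass on the top class: [0.5, 0.3, 0.2] -> ~[0.66, 0.24, 0.11]
p = sharpen([0.5, 0.3, 0.2], T=0.5)
```

Exchanging (occasionally refreshed) soft labels instead of full parameter vectors is what lets distillation-based FL cut communication and tolerate heterogeneous client models.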

Matryoshka Model Learning for Improved Elastic Student Models

arXiv:2505.23337v3 Announce Type: replace Abstract: Industry-grade ML models are carefully designed to meet rapidly evolving serving constraints, which requires significant resources for model development. In this paper, we propose MatTA, a framework for training multiple accurate Student models using a…

Retrieval-Augmented Memory for Online Learning

arXiv:2512.02333v1 Announce Type: new Abstract: Retrieval-augmented models couple parametric predictors with non-parametric memories, but their use in streaming supervised learning with concept drift is not well understood. We study online classification in non-stationary environments and propose Retrieval-Augmented Memory for Online…
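A non-parametric memory for streaming classification under concept drift can be sketched with a bounded FIFO buffer and k-NN retrieval. This is a generic illustration of the setting, not the proposed method; the class name and parameters are hypothetical:

```python
import numpy as np
from collections import deque

class RetrievalMemoryClassifier:
    """Bounded memory of recent (x, y) pairs; predicts by k-NN vote.
    The FIFO bound evicts stale examples, letting predictions track drift."""

    def __init__(self, capacity=200, k=5):
        self.memory = deque(maxlen=capacity)
        self.k = k

    def predict(self, x, default=0):
        if not self.memory:
            return default
        dists = sorted((np.linalg.norm(x - xm), ym) for xm, ym in self.memory)
        votes = [y for _, y in dists[:self.k]]
        return max(set(votes), key=votes.count)

    def update(self, x, y):
        self.memory.append((x, y))

clf = RetrievalMemoryClassifier(capacity=50, k=3)
rng = np.random.default_rng(1)
# Phase 1: label is the sign of the first coordinate.
for _ in range(100):
    x = rng.normal(size=2)
    clf.update(x, int(x[0] > 0))
# Phase 2: concept drift, the labeling rule flips.
for _ in range(100):
    x = rng.normal(size=2)
    clf.update(x, int(x[0] <= 0))
# The bounded memory now holds only post-drift examples,
# so retrieval reflects the new concept.
pred = clf.predict(np.array([2.0, 0.0]))
```

In streaming supervised learning the model would interleave `predict` and `update` on each arriving example; coupling such a memory with a parametric predictor is the retrieval-augmented setup the abstract studies.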