Archives AI News

Federated Multi-Task Clustering

arXiv:2512.22897v2 Announce Type: replace Abstract: Spectral clustering has emerged as one of the most effective clustering algorithms due to its superior performance. However, most existing models are designed for centralized settings, rendering them inapplicable in modern decentralized environments. Moreover, current…
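For readers unfamiliar with the base algorithm the abstract refers to, here is a minimal sketch of classical (centralized) spectral clustering: embed nodes via the bottom eigenvectors of the normalized graph Laplacian, then cluster the embeddings with k-means. This is the textbook method, not the federated multi-task model proposed in the paper; the function name and the simple farthest-point k-means initialization are illustrative choices.

```python
import numpy as np

def spectral_clustering(W, k, n_iter=100):
    """Cluster the nodes of a similarity graph into k groups.
    W: (n, n) symmetric nonnegative affinity matrix."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Embed each node with the k eigenvectors of smallest eigenvalue.
    _, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :k]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # Deterministic farthest-point initialization for k-means.
    idx = [0]
    for _ in range(k - 1):
        dists = np.min(((U[:, None] - U[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(dists)))
    centers = U[idx].copy()
    # Lloyd's iterations: assign to nearest center, recompute means.
    for _ in range(n_iter):
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = U[labels == j].mean(axis=0)
    return labels
```

The decentralized setting the abstract targets is precisely where this recipe breaks down: both the eigendecomposition and k-means assume the full affinity matrix sits on one machine.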

A Survey on Graph Neural Networks for Fraud Detection in Ride Hailing Platforms

arXiv:2512.23777v1 Announce Type: new Abstract: This study investigates fraud detection in ride hailing platforms through Graph Neural Networks (GNNs), focusing on the effectiveness of various models. By analyzing prevalent fraudulent activities, the research highlights and compares the existing work related to…


Myopically Verifiable Probabilistic Certificates for Safe Control and Learning

arXiv:2404.16883v2 Announce Type: replace-cross Abstract: This paper addresses the design of safety certificates for stochastic systems, with a focus on ensuring long-term safety through fast real-time control. In stochastic environments, set invariance-based methods that restrict the probability of risk events…

Zero-Trust Agentic Federated Learning for Secure IIoT Defense Systems

arXiv:2512.23809v1 Announce Type: new Abstract: Recent attacks on critical infrastructure, including the 2021 Oldsmar water treatment breach and 2023 Danish energy sector compromises, highlight urgent security gaps in Industrial IoT (IIoT) deployments. While Federated Learning (FL) enables privacy-preserving collaborative intrusion…
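As background on the FL setup the abstract builds on, a single aggregation round of standard FedAvg looks like the sketch below: the server averages client model weights, weighted by each client's local dataset size. This is the generic baseline, not the paper's zero-trust agentic protocol, and the function name is illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg round: size-weighted average of client models.
    client_weights: list of models, each a list of numpy arrays.
    client_sizes: number of local training samples per client."""
    total = float(sum(client_sizes))
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for j, w in enumerate(weights):
            avg[j] += (n / total) * w  # clients with more data count more
    return avg
```

Raw data never leaves the client, which is the privacy property the abstract refers to; the attacks it discusses target what the weight updates themselves can leak or poison.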

Improved Bounds for Private and Robust Alignment

arXiv:2512.23816v1 Announce Type: new Abstract: In this paper, we study the private and robust alignment of language models from a theoretical perspective by establishing upper bounds on the suboptimality gap in both offline and online settings. We consider preference labels…

DiRe: Diversity-promoting Regularization for Dataset Condensation

arXiv:2512.13083v2 Announce Type: replace-cross Abstract: In Dataset Condensation, the goal is to synthesize a small dataset that replicates the training utility of a large original dataset. Existing condensation methods synthesize datasets with significant redundancy, so there is a dire need…

MS-SSM: A Multi-Scale State Space Model for Efficient Sequence Modeling

arXiv:2512.23824v1 Announce Type: new Abstract: State-space models (SSMs) have recently attracted attention as an efficient alternative to computationally expensive attention-based models for sequence modeling. They rely on linear recurrences to integrate information over time, enabling fast inference, parallelizable training, and control…
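The linear recurrence the abstract mentions can be sketched in a few lines: a hidden state decays under a (here diagonal) transition matrix while new input is injected, and the output is a linear readout of the state. This is the generic SSM recurrence, not the paper's multi-scale architecture; the diagonal parameterization and names are illustrative.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Sequential scan of a discrete linear state-space model.
    h_t = A * h_{t-1} + B * x_t   (A diagonal, applied elementwise)
    y_t = C . h_t                 (linear readout)
    x: (T,) input sequence; A, B, C: (N,) parameters for N states."""
    h = np.zeros_like(A)
    ys = []
    for x_t in x:
        h = A * h + B * x_t   # decay old state, inject current input
        ys.append(C @ h)      # project state to a scalar output
    return np.array(ys)
```

With A = 1 the state accumulates inputs without decay, so the scan reduces to a running sum; entries of A closer to 0 give faster-decaying, shorter-range memory, which is the timescale knob a multi-scale model would vary.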