Archives AI News

Federated Learning over Blockchain-Enabled Cloud Infrastructure

arXiv:2604.20062v1 Announce Type: new Abstract: The rise of IoT devices and the uptake of cloud computing have ushered in a new era of data-driven intelligence. Traditional centralized machine learning models that require a large volume of data to be stored in…
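The paper's own protocol is not shown in the truncated abstract. As a hedged illustration only, the standard federated averaging (FedAvg) aggregation step that underlies most federated learning systems can be sketched as a dataset-size-weighted average of client model parameters; the function name and flat-array model representation here are assumptions for brevity:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Two equally sized clients: the global model is the plain mean.
global_model = fedavg([np.array([0.0, 2.0]), np.array([2.0, 0.0])], [1, 1])
```

In a full system each round would interleave local SGD on-device with this server-side aggregation; only the parameters, not the raw data, leave the clients.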

Towards Certified Malware Detection: Provable Guarantees Against Evasion Attacks

arXiv:2604.20495v1 Announce Type: cross Abstract: Machine learning-based static malware detectors remain vulnerable to adversarial evasion techniques, such as metamorphic engine mutations. To address this vulnerability, we propose a certifiably robust malware detection framework based on randomized smoothing through feature ablation…
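The paper's exact certification procedure is not visible in the truncated abstract. As a sketch of the general idea of randomized smoothing via feature ablation, a smoothed classifier can majority-vote a base classifier's predictions over many random ablations (zeroings) of input features; all names and parameters below are illustrative assumptions, and a real certificate would additionally bound the vote margin:

```python
import numpy as np

def smoothed_predict(base_clf, x, n_samples=100, keep_prob=0.5,
                     n_classes=2, rng=None):
    """Majority vote of `base_clf` over random feature ablations.

    Each sample keeps every feature independently with probability
    `keep_prob` and zeroes the rest; the returned class is the most
    frequent base prediction across samples.
    """
    rng = np.random.default_rng(rng)
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        mask = rng.random(x.shape) < keep_prob
        votes[base_clf(x * mask)] += 1
    return int(np.argmax(votes))
```

Because each prediction depends only on a random subset of features, an attacker must perturb many features to flip the majority vote, which is what makes smoothing-style guarantees possible.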

Maximum Entropy Semi-Supervised Inverse Reinforcement Learning

arXiv:2604.20074v1 Announce Type: new Abstract: A popular approach to apprenticeship learning (AL) is to formulate it as an inverse reinforcement learning (IRL) problem. The MaxEnt-IRL algorithm successfully integrates the maximum entropy principle into IRL and unlike its predecessors, it resolves…
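The semi-supervised extension is not described in the truncated abstract. For context, the core MaxEnt-IRL modeling choice is that trajectory probability is proportional to exponentiated return, P(tau) ∝ exp(R(tau)); over an enumerable trajectory set this is just a softmax over returns, sketched here (function name is illustrative):

```python
import numpy as np

def maxent_trajectory_distribution(returns):
    """MaxEnt-IRL trajectory distribution: P(tau) ∝ exp(R(tau)).

    Computed as a numerically stable softmax over a finite set of
    trajectory returns.
    """
    r = np.asarray(returns, dtype=float)
    z = np.exp(r - r.max())  # subtract max for stability
    return z / z.sum()
```

The maximum entropy principle resolves the ambiguity among reward functions consistent with the demonstrations: among all trajectory distributions matching the expert's feature expectations, it picks the one with highest entropy, i.e. the least committed beyond the observed data.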

Auto-ART: Structured Literature Synthesis and Automated Adversarial Robustness Testing

arXiv:2604.20704v1 Announce Type: cross Abstract: Adversarial robustness evaluation underpins every claim of trustworthy ML deployment, yet the field suffers from fragmented protocols and undetected gradient masking. We make two contributions. (1) Structured synthesis. We analyze nine peer-reviewed corpus sources (2020–2026)…

Analysis of Nyström method with sequential ridge leverage scores

arXiv:2604.20077v1 Announce Type: new Abstract: Large-scale kernel ridge regression (KRR) is limited by the need to store a large kernel matrix K_t. To avoid storing the entire matrix K_t, Nyström methods subsample a subset of columns of the kernel matrix,…
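The paper's sequential leverage-score sampling scheme is not shown in the truncated abstract, but the underlying Nyström approximation it builds on is standard: given landmark column indices, approximate the PSD kernel matrix as K ≈ C W⁺ Cᵀ, where C holds the sampled columns and W their intersection block. A minimal sketch (index selection here is taken as given, whereas leverage-score methods choose it adaptively):

```python
import numpy as np

def nystrom_approx(K, idx):
    """Nyström approximation of a PSD kernel matrix.

    K:   (n, n) kernel matrix
    idx: indices of the sampled landmark columns
    Returns K_hat = C @ pinv(W) @ C.T with C = K[:, idx] and
    W = K[idx][:, idx]; only the sampled columns need be stored.
    """
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```

The approximation is exact whenever rank(K) ≤ |idx| and the landmarks span the range of K; leverage scores quantify how informative each column is, so sampling by them keeps the error small with few landmarks.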

On the Quantization Robustness of Diffusion Language Models in Coding Benchmarks

arXiv:2604.20079v1 Announce Type: new Abstract: Auto-regressive Large Language Models (LLMs) achieve strong performance on coding tasks, but incur high memory and inference costs. Diffusion-based language models (d-LLMs) offer bounded inference cost via iterative denoising, but their behavior under post-training quantization…
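The paper's quantization settings are not visible in the truncated abstract. As background, the most common post-training scheme is symmetric uniform quantization: scale weights into a signed integer grid and map back, as in this illustrative round-trip sketch (per-tensor scale and function name are assumptions; real pipelines often use per-channel scales):

```python
import numpy as np

def quantize_dequantize(w, bits=8):
    """Symmetric uniform post-training quantization round-trip.

    Maps weights to signed integers in [-(2**(bits-1) - 1),
    2**(bits-1) - 1] using a single per-tensor scale, then back
    to floats, simulating the quantization error.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale
```

Measuring task accuracy on such round-tripped weights at decreasing bit widths is the usual way to probe quantization robustness, which is presumably the kind of evaluation the benchmark study performs for d-LLMs.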