Archives AI News

Engineering FAIR Privacy-preserving Applications that Learn Histories of Disease

arXiv:2603.00181v1 Announce Type: new Abstract: A recent report on “Learning the natural history of human disease with generative transformers” created an opportunity to assess the engineering challenge of delivering user-facing Generative AI applications in privacy-sensitive domains. The application of these…

GPU-Fuzz: Finding Memory Errors in Deep Learning Frameworks

arXiv:2602.10478v3 Announce Type: replace-cross Abstract: GPU memory errors are a critical threat to deep learning (DL) frameworks, leading to crashes or even security issues. We introduce GPU-Fuzz, a fuzzer that locates these errors efficiently by modeling operator parameters as formal constraints.…
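The abstract's core idea, treating operator parameters as constrained values and sampling within those constraints, can be illustrated with a minimal sketch. All names here (the constraint table, `fuzz`, the toy operator) are hypothetical illustrations, not GPU-Fuzz's actual API:

```python
import random

# Hypothetical sketch of constraint-guided parameter fuzzing. Each operator
# parameter is modeled as a simple range constraint; the fuzzer samples
# parameter sets that satisfy every constraint and records any crash.

CONV2D_CONSTRAINTS = {
    "in_channels":  (1, 64),
    "out_channels": (1, 64),
    "kernel_size":  (1, 7),
    "stride":       (1, 4),
    "padding":      (0, 3),
}

def sample_params(constraints, rng):
    """Draw one parameter set satisfying every range constraint."""
    return {name: rng.randint(lo, hi) for name, (lo, hi) in constraints.items()}

def fuzz(run_op, constraints, trials=200, seed=0):
    """Run the operator on random valid parameter sets; collect failures."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        params = sample_params(constraints, rng)
        try:
            run_op(**params)
        except Exception as exc:  # a crash flags a candidate memory bug
            failures.append((params, exc))
    return failures

def toy_conv2d(in_channels, out_channels, kernel_size, stride, padding):
    """Stand-in for a real kernel that misbehaves when padding >= kernel_size."""
    if padding >= kernel_size:
        raise RuntimeError("illegal memory access (simulated)")

bugs = fuzz(toy_conv2d, CONV2D_CONSTRAINTS)
print(f"found {len(bugs)} crashing parameter sets")
```

A real fuzzer would replace the range constraints with richer formal constraints (e.g., shape-compatibility relations between parameters) and run actual framework operators under a memory-error detector.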

OSF: On Pre-training and Scaling of Sleep Foundation Models

arXiv:2603.00190v1 Announce Type: new Abstract: Polysomnography (PSG) provides the gold standard for sleep assessment but suffers from substantial heterogeneity across recording devices and cohorts. There have been growing efforts to build general-purpose foundation models (FMs) for sleep physiology, but lack…

MoMa: A Modular Deep Learning Framework for Material Property Prediction

arXiv:2502.15483v3 Announce Type: replace Abstract: Deep learning methods for material property prediction have been widely explored to advance materials discovery. However, the prevailing pre-train then fine-tune paradigm often fails to address the inherent diversity and disparity of material tasks. To…

Adaptive Confidence Regularization for Multimodal Failure Detection

arXiv:2603.02200v1 Announce Type: cross Abstract: The deployment of multimodal models in high-stakes domains, such as self-driving vehicles and medical diagnostics, demands not only strong predictive performance but also reliable mechanisms for detecting failures. In this work, we address the largely…
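As background for the failure-detection setting the abstract describes, the standard baseline is maximum-softmax-probability thresholding: flag a prediction as a potential failure when the model's top class probability is low. This sketch shows that baseline only, not the paper's adaptive confidence regularizer:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def flag_failure(logits, threshold=0.7):
    """Return (predicted_class, confidence, is_flagged).

    A prediction is flagged when the top softmax probability falls
    below the threshold, i.e. the model is not confident.
    """
    probs = softmax(logits)
    confidence = max(probs)
    predicted = probs.index(confidence)
    return predicted, confidence, confidence < threshold

# Large logit gap: confident, not flagged.
print(flag_failure([5.0, 0.1, 0.2]))
# Near-uniform logits: low confidence, flagged as a potential failure.
print(flag_failure([1.0, 0.9, 1.1]))
```

Regularization-based approaches like the one in the abstract aim to make these confidence scores better calibrated across modalities, so the same thresholding becomes more reliable.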

TAO: Tolerance-Aware Optimistic Verification for Floating-Point Neural Networks

arXiv:2510.16028v3 Announce Type: replace-cross Abstract: Neural networks increasingly run on hardware outside the user’s control (cloud GPUs, inference marketplaces). Yet ML-as-a-Service reveals little about what actually ran or whether returned outputs faithfully reflect the intended inputs. Users lack recourse against…
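The "tolerance-aware" part of the title points at a basic fact of floating-point verification: honest re-execution on different hardware can differ from the provider's output by rounding error, so exact equality checks reject honest providers. A minimal sketch of a tolerance-band acceptance check (illustrative only, not TAO's actual protocol):

```python
def within_tolerance(claimed, reference, rel=1e-5, abs_tol=1e-7):
    """Accept the claimed output iff every element lies in a tolerance
    band around the verifier's reference: |c - r| <= abs_tol + rel*|r|."""
    if len(claimed) != len(reference):
        return False
    return all(abs(c - r) <= abs_tol + rel * abs(r)
               for c, r in zip(claimed, reference))

# Reference values recomputed by the verifier.
reference = [0.1 + 0.2, 1.0 / 3.0]
# Rounding-level drift from an honest provider stays inside the band.
honest = [0.30000000000000004, 0.3333333333333333]
# A tampered output exceeds the band and is rejected.
tampered = [0.31, 0.3333333333333333]

print(within_tolerance(honest, reference))
print(within_tolerance(tampered, reference))
```

An "optimistic" scheme, as the title suggests, would only run such a check when a result is disputed, rather than re-verifying every inference.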