Archives AI News

Post-Pruning Accuracy Recovery via Data-Free Knowledge Distillation

arXiv:2511.20702v1 Announce Type: new Abstract: Model pruning is a widely adopted technique to reduce the computational complexity and memory footprint of Deep Neural Networks (DNNs). However, global unstructured pruning often leads to significant degradation in accuracy, typically necessitating fine-tuning on…
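
The abstract is cut off before the method is described, so the following is only a generic illustration of the pruning step the abstract names, global unstructured (magnitude) pruning, not the paper's data-free distillation recovery. All names and thresholds below are illustrative.

```python
import numpy as np

def global_unstructured_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights jointly across all layers.

    Unlike layer-wise pruning, the magnitude threshold is computed over the
    concatenation of every layer's weights ("global" pruning).
    """
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(len(flat) * sparsity)
    threshold = np.partition(np.abs(flat), k)[k]   # k-th smallest magnitude
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

# Two toy "layers"; roughly half the weights fall below the global threshold.
layers = [np.array([[0.5, -0.01], [0.2, 0.03]]),
          np.array([0.4, -0.002, 0.9])]
pruned = global_unstructured_prune(layers, sparsity=0.5)
```

Because the threshold is shared, a layer with uniformly small weights can lose far more of its parameters than others, which is one reason accuracy can degrade sharply and recovery (here, via the paper's data-free distillation) becomes necessary.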

CHiQPM: Calibrated Hierarchical Interpretable Image Classification

arXiv:2511.20779v1 Announce Type: new Abstract: Globally interpretable models are a promising approach for trustworthy AI in safety-critical domains. Alongside global explanations, detailed local explanations are a crucial complement to effectively support human experts during inference. This work proposes the Calibrated…

TAB-DRW: A DFT-based Robust Watermark for Generative Tabular Data

arXiv:2511.21600v1 Announce Type: cross Abstract: The rise of generative AI has enabled the production of high-fidelity synthetic tabular data across fields such as healthcare, finance, and public policy, raising growing concerns about data provenance and misuse. Watermarking offers a promising…
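
The abstract stops before TAB-DRW's actual scheme is given. As a sketch of the general idea of a DFT-domain watermark for a numeric column, here is a simple non-blind embed/extract pair (the frequency indices, strength, and sign-based encoding are assumptions for illustration, not the paper's design):

```python
import numpy as np

def embed_watermark(column, key_bits, strength=0.05):
    """Embed key bits by shifting low-frequency DFT coefficients up or down."""
    spec = np.fft.rfft(column)
    idx = np.arange(1, 1 + len(key_bits))          # skip the DC component
    signs = np.where(np.asarray(key_bits) == 1, 1.0, -1.0)
    spec[idx] += strength * signs * np.abs(spec).mean()
    return np.fft.irfft(spec, n=len(column))

def extract_watermark(original, watermarked, n_bits):
    """Non-blind extraction: read the sign of the spectral shift per bit."""
    diff = np.fft.rfft(watermarked) - np.fft.rfft(original)
    return [1 if diff[i].real > 0 else 0 for i in range(1, 1 + n_bits)]

rng = np.random.default_rng(0)
col = rng.normal(50.0, 5.0, size=256)              # synthetic numeric column
bits = [1, 0, 1, 1, 0, 0, 1, 0]
wm = embed_watermark(col, bits)
recovered = extract_watermark(col, wm, len(bits))
```

Spreading the mark across frequency coefficients is what makes spectral watermarks comparatively robust: a perturbation to any single row changes every coefficient only slightly.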

Physics Steering: Causal Control of Cross-Domain Concepts in a Physics Foundation Model

arXiv:2511.20798v1 Announce Type: new Abstract: Recent advances in mechanistic interpretability have revealed that large language models (LLMs) develop internal representations corresponding not only to concrete entities but also distinct, human-understandable abstract concepts and behaviour. Moreover, these hidden features can be…
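
The truncated abstract points at steering internal representations. A common baseline from the mechanistic-interpretability literature, used here purely as an illustration and not as this paper's method, is a difference-of-means "steering vector" added to a hidden state:

```python
import numpy as np

def steering_vector(acts_with, acts_without):
    """Difference-of-means direction for a concept in activation space."""
    return acts_with.mean(axis=0) - acts_without.mean(axis=0)

def steer(hidden, vec, alpha=2.0):
    """Nudge a hidden state along the (unit-normalized) concept direction."""
    return hidden + alpha * vec / (np.linalg.norm(vec) + 1e-8)

rng = np.random.default_rng(1)
concept  = rng.normal(0.5, 1.0, size=(32, 16))  # activations, concept present
baseline = rng.normal(0.0, 1.0, size=(32, 16))  # activations, concept absent
v = steering_vector(concept, baseline)
h = rng.normal(size=16)
h_steered = steer(h, v, alpha=3.0)
```

The scale `alpha` trades off how strongly the concept is expressed against how far the activation moves off the model's usual manifold.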

Asymmetric Duos: Sidekicks Improve Uncertainty

arXiv:2505.18636v2 Announce Type: replace Abstract: The go-to strategy for applying deep networks in settings where uncertainty informs decisions, ensembling multiple training runs with random initializations, is ill-suited for the extremely large-scale models and practical fine-tuning workflows of today. We introduce a new…
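
The abstract ends before the Duo construction itself; as background only, the ensembling baseline it contrasts against combines members' probabilities and reads uncertainty off the mixture. A minimal two-member sketch (the pairing of a large model with a smaller "sidekick" here is an assumed setup, not the paper's recipe):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def duo_predict(logits_large, logits_small):
    """Average two members' probabilities; entropy of the mixture is the
    predictive uncertainty, and it rises when the members disagree."""
    p = 0.5 * (softmax(logits_large) + softmax(logits_small))
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return p, entropy

# Members that agree on class 0 vs. members that disagree on the top class.
agree = duo_predict(np.array([[4.0, 0.0, 0.0]]), np.array([[3.0, 0.0, 0.0]]))
clash = duo_predict(np.array([[4.0, 0.0, 0.0]]), np.array([[0.0, 3.0, 0.0]]))
```

Training every member from scratch is what makes this baseline impractical at today's scale, which is the gap the abstract says the paper targets.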

Effects of Initialization Biases on Deep Neural Network Training Dynamics

arXiv:2511.20826v1 Announce Type: new Abstract: Untrained large neural networks, just after random initialization, tend to favour a small subset of classes, assigning high predicted probabilities to these few classes and approximately zero probability to all others. This bias, termed Initial…
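
The bias described, an untrained network concentrating probability mass on a few classes, can be probed directly. A small sketch with an assumed tanh MLP (architecture, widths, and seed are illustrative, not the paper's setup): run random inputs through a freshly initialized network and measure how unevenly argmax predictions spread over classes.

```python
import numpy as np

def untrained_mlp_probs(x, widths, rng):
    """Forward pass through a randomly initialized tanh MLP; softmax output."""
    h = x
    for i, (w_in, w_out) in enumerate(zip(widths[:-1], widths[1:])):
        W = rng.normal(0.0, 1.0 / np.sqrt(w_in), size=(w_in, w_out))
        h = h @ W
        if i < len(widths) - 2:           # keep the final readout layer linear
            h = np.tanh(h)
    e = np.exp(h - h.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 32))            # 200 random inputs
probs = untrained_mlp_probs(x, [32, 64, 64, 64, 10], rng)

# Share of inputs assigned to the single most-predicted class; an unbiased
# 10-class network would sit near 0.1, and the abstract's point is that
# untrained networks often sit far above it.
concentration = np.bincount(probs.argmax(axis=1), minlength=10).max() / len(x)
```

How strong the concentration is depends on depth, width, and initialization scale, which is exactly the dependence on training dynamics the paper studies.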