AI News Archives

AtomSurf: Surface Representation for Learning on Protein Structures

arXiv:2309.16519v4 Announce Type: replace Abstract: While there has been significant progress in evaluating and comparing different representations for learning on protein data, the role of surface-based learning approaches remains poorly understood. In particular, there is a lack of direct and…

Subliminal Corruption: Mechanisms, Thresholds, and Interpretability

arXiv:2510.19152v1 Announce Type: new Abstract: As machine learning models are increasingly fine-tuned on synthetic data, there is a critical risk of subtle misalignments spreading through interconnected AI systems. This paper investigates subliminal corruption, which we define as undesirable traits that are…
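The snippet below is a minimal toy sketch of the dose-response question the title's "thresholds" raises, not the paper's experimental setup: a "corrupted" teacher subtly over-emits one token, a unigram student is fit on mixtures of clean and corrupted synthetic data, and the student's expression of the trait is tracked against the corrupted fraction. VOCAB, TRAIT_TOKEN, and the unigram models are all hypothetical illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50
TRAIT_TOKEN = 7  # hypothetical "trait" the corrupted teacher over-emits

def teacher_dist(corrupted: bool) -> np.ndarray:
    """Unigram distribution; the corrupted teacher subtly boosts one token."""
    p = np.ones(VOCAB)
    if corrupted:
        p[TRAIT_TOKEN] *= 3.0  # a subtle, low-magnitude bias
    return p / p.sum()

def finetune_student(frac_corrupted: float, n_samples: int = 20_000) -> np.ndarray:
    """Fit a unigram 'student' on a mix of clean and corrupted synthetic data."""
    n_bad = int(frac_corrupted * n_samples)
    data = np.concatenate([
        rng.choice(VOCAB, size=n_samples - n_bad, p=teacher_dist(False)),
        rng.choice(VOCAB, size=n_bad, p=teacher_dist(True)),
    ])
    counts = np.bincount(data, minlength=VOCAB).astype(float)
    return counts / counts.sum()

baseline = teacher_dist(False)[TRAIT_TOKEN]
for frac in [0.0, 0.1, 0.25, 0.5, 1.0]:
    student = finetune_student(frac)
    print(f"corrupted fraction {frac:.2f} -> trait lift "
          f"{student[TRAIT_TOKEN] / baseline:.2f}x")
```

Even this toy makes the core mechanism visible: the student inherits a trait it was never explicitly taught, and the strength of the inherited trait scales with how much corrupted synthetic data entered the training mix.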

Unveiling Transformer Perception by Exploring Input Manifolds

arXiv:2410.06019v2 Announce Type: replace Abstract: This paper introduces a general method for the exploration of equivalence classes in the input space of Transformer models. The proposed approach is based on sound mathematical theory which describes the internal layers of a…
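As a generic illustration of what exploring an equivalence class in input space means (the paper's own method is analytical and only summarized above, so this is not it), the sketch below runs a gradient-based preimage search: starting from a perturbed input, it descends until a stand-in layer maps the new point to the same representation as the original. The toy layer, dimensions, and hyperparameters are all assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Stand-in for a transformer layer: any differentiable map illustrates the idea.
layer = nn.Sequential(nn.Linear(16, 32), nn.GELU(), nn.Linear(32, 16))

x = torch.randn(1, 16)
target = layer(x).detach()  # the representation whose preimage we explore

# Start from a perturbed input and descend back onto the equivalence class.
x_alt = (x + 0.5 * torch.randn_like(x)).requires_grad_(True)
opt = torch.optim.Adam([x_alt], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = (layer(x_alt) - target).pow(2).sum()
    loss.backward()
    opt.step()

print("representation gap:", (layer(x_alt) - target).norm().item())
print("input distance:    ", (x_alt - x).norm().item())  # > 0: a distinct preimage point
```

If the final representation gap is near zero while the input distance is not, `x_alt` is a genuinely different input the model "perceives" as equivalent to `x`.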

Feature Space Adaptation for Robust Model Fine-Tuning

arXiv:2510.19155v1 Announce Type: new Abstract: Catastrophic forgetting is a common issue in model fine-tuning, especially when the downstream domain contains limited labeled data or differs greatly from the pre-training distribution. Existing parameter-efficient fine-tuning methods operate in the weight space by…
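To make the weight-space versus feature-space contrast concrete, here is a minimal sketch of the general idea (a generic feature-space adapter, not the method proposed in the paper): the backbone's weights are frozen entirely, and a small learnable affine transform is trained on its output features instead.

```python
import torch
import torch.nn as nn

class FeatureAdapter(nn.Module):
    """Learnable affine transform applied to frozen backbone features."""
    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, feats):
        return feats * self.scale + self.shift

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
for p in backbone.parameters():
    p.requires_grad_(False)  # weights untouched: adaptation lives in feature space

adapter = FeatureAdapter(64)
head = nn.Linear(64, 2)
opt = torch.optim.Adam(list(adapter.parameters()) + list(head.parameters()), lr=1e-3)

x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(adapter(backbone(x))), y)
    loss.backward()
    opt.step()
```

The contrast with a weight-space method like LoRA is that nothing here reparameterizes the backbone's weight matrices; the pre-trained features are left intact and only lightly re-shaped, which is one reason feature-space approaches are discussed as a hedge against catastrophic forgetting.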

Training-Free Constrained Generation With Stable Diffusion Models

arXiv:2502.05625v4 Announce Type: replace Abstract: Stable diffusion models represent the state-of-the-art in data synthesis across diverse domains and hold transformative potential for applications in science and engineering, e.g., by facilitating the discovery of novel solutions and simulating systems that are…
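The sketch below shows one generic way to impose constraints at sampling time with no retraining: annealed Langevin sampling in which a penalty gradient steers samples toward the feasible set. An analytic Gaussian score stands in for a pretrained diffusion model, and the nonnegativity constraint, schedule, and guidance weight are arbitrary assumptions, so read this as an illustration of the general idea rather than the paper's algorithm.

```python
import torch

torch.manual_seed(0)

def score(x, sigma):
    # Analytic score of a standard Gaussian smoothed to noise level sigma;
    # a pretrained diffusion model's score network would replace this.
    return -x / (1.0 + sigma**2)

def constraint_grad(x):
    # Gradient of a quadratic penalty for a hypothetical constraint x >= 0.
    return -2.0 * torch.relu(-x)

x = 10.0 * torch.randn(256, 2)  # start from wide noise
for sigma in torch.linspace(10.0, 0.01, 200):
    step = 0.05 * sigma**2
    # Annealed Langevin update driven by the (frozen) model score ...
    x = x + step * score(x, sigma) + (2 * step).sqrt() * torch.randn_like(x)
    # ... plus a training-free guidance step toward the feasible set.
    x = x - 0.1 * constraint_grad(x)

print("violation rate:", (x < 0).any(dim=-1).float().mean().item())
```

The key property is that the generative model itself is never touched: the constraint enters only as an extra term in the sampling dynamics, which is what "training-free" refers to here.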

Kolmogorov-Arnold Attention: Is Learnable Attention Better For Vision Transformers?

arXiv:2503.10632v3 Announce Type: replace Abstract: Kolmogorov-Arnold networks (KANs) are a remarkable innovation built around learnable activation functions, with the potential to capture more complex relationships from data. Presently, KANs are deployed by replacing multilayer perceptrons (MLPs) in deep networks,…
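As a rough sketch of what learnable attention can look like (one plausible reading of the title, not necessarily the paper's formulation), the code below replaces the fixed score transform in a single-head attention block with a KAN-style univariate function parameterized by learnable RBF coefficients. All names, basis choices, and sizes are assumptions.

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """KAN-style univariate function: a learnable mixture of RBF basis functions."""
    def __init__(self, n_basis: int = 8):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(-2.0, 2.0, n_basis))
        self.coeffs = nn.Parameter(0.1 * torch.randn(n_basis))
        self.width = 0.5

    def forward(self, x):
        # Evaluate every scalar in x against the basis, then mix with coeffs.
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        return basis @ self.coeffs

class KANAttention(nn.Module):
    """Single-head attention whose score nonlinearity is learned, not fixed."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.act = LearnableActivation()
        self.scale = dim ** -0.5

    def forward(self, x):  # x: (batch, seq, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = self.act(q @ k.transpose(-2, -1) * self.scale)
        return scores.softmax(dim=-1) @ v

x = torch.randn(2, 5, 16)
print(KANAttention(16)(x).shape)  # torch.Size([2, 5, 16])
```

The softmax is kept so attention rows remain a distribution; the learnable part is the scalar transform applied to the raw scores, which in a standard transformer is fixed (identity plus scaling) rather than fit to the data.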