AI News Archives

Superpositional Gradient Descent: Harnessing Quantum Principles for Model Training

arXiv:2511.01918v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly trained with classical optimization techniques like AdamW to improve convergence and generalization. However, the mechanisms by which quantum-inspired methods enhance classical training remain underexplored. We introduce Superpositional Gradient Descent…
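The abstract is cut off before the method is defined, so purely as a toy illustration of what a "superposition-mixed" gradient update could look like, here is a minimal NumPy sketch. The function name, the mixing rule, and the `mix` parameter are all assumptions, not the paper's algorithm:

```python
import numpy as np

def superpositional_gd_step(params, grad, lr=0.01, mix=0.1, rng=None):
    """One hypothetical 'superposition-mixed' update: blend the classical
    gradient with a random unit direction, loosely mimicking sampling from
    a superposition of descent directions. Illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(params.shape)
    noise /= np.linalg.norm(noise) + 1e-12           # unit perturbation
    direction = (1.0 - mix) * grad + mix * np.linalg.norm(grad) * noise
    return params - lr * direction

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -2.0])
for _ in range(200):
    x = superpositional_gd_step(x, 2.0 * x, lr=0.05, mix=0.1)
print(x)  # approaches the origin
```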

The Eigenvalues Entropy as a Classifier Evaluation Measure

arXiv:2511.01904v1 Announce Type: new Abstract: Classification is a machine learning method used in many practical applications: text mining, handwritten character recognition, face recognition, pattern classification, scene labeling, computer vision, natural language processing. A classifier's prediction results and training set information…
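The snippet stops before the measure's definition. One plausible reading, offered only as a sketch, computes the Shannon entropy of the normalized eigenvalue spectrum of a confusion matrix; the confusion-matrix input, the normalization, and the function name below are all assumptions:

```python
import numpy as np

def eigenvalues_entropy(confusion):
    """Shannon entropy of the normalized eigenvalue spectrum of a
    confusion matrix -- one plausible reading of the measure; the
    paper's exact definition is not shown in the snippet above."""
    eigvals = np.abs(np.linalg.eigvals(confusion.astype(float)))
    p = eigvals / eigvals.sum()
    p = p[p > 0]                       # drop zero modes before the log
    return float(-(p * np.log(p)).sum())

# Toy 3-class confusion matrix (rows: true class, cols: predicted class).
C = np.array([[50, 2, 1],
              [3, 45, 4],
              [0, 5, 40]])
print(eigenvalues_entropy(C))
```

Under this reading, a spectrum concentrated on one eigenvalue (low entropy) signals a more structured, near-diagonal confusion matrix than a flat spectrum does.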

Gradient GA: Gradient Genetic Algorithm for Drug Molecular Design

arXiv:2502.09860v2 Announce Type: replace-cross Abstract: Molecular discovery has brought great benefits to the chemical industry. Various molecular design techniques have been developed to identify molecules with desirable properties. Traditional optimization methods, such as genetic algorithms, continue to achieve state-of-the-art results across…
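As a hedged sketch of the general idea of gradient-guided genetic search (in a continuous vector space rather than actual molecular graphs), the toy below replaces purely random mutation with a gradient step on a differentiable fitness surrogate plus noise. None of this is the paper's algorithm; `gradient_ga`, the selection rule, and the objective are illustrative only:

```python
import numpy as np

def gradient_ga(fitness, grad, pop, gens=50, lr=0.1, sigma=0.05, rng=None):
    """Toy gradient-guided GA: selection keeps the fitter half of the
    population, and 'mutation' is a gradient ascent step on a
    differentiable fitness surrogate plus Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(gens):
        scores = np.array([fitness(x) for x in pop])
        keep = pop[np.argsort(scores)[len(pop) // 2:]]   # top half survives
        children = np.array([x + lr * grad(x) +
                             sigma * rng.standard_normal(x.shape)
                             for x in keep])
        pop = np.concatenate([keep, children])
    return pop[np.argmax([fitness(x) for x in pop])]

# Toy objective: maximize -||x - 1||^2 (optimum at the all-ones vector).
f = lambda x: -np.sum((x - 1.0) ** 2)
g = lambda x: -2.0 * (x - 1.0)
rng = np.random.default_rng(0)
best = gradient_ga(f, g, rng.standard_normal((20, 4)), rng=rng)
print(best)  # near [1, 1, 1, 1]
```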

Tool Zero: Training Tool-Augmented LLMs via Pure RL from Scratch

arXiv:2511.01934v1 Announce Type: new Abstract: Training tool-augmented LLMs has emerged as a promising approach to enhancing language models’ capabilities for complex tasks. The current supervised fine-tuning paradigm relies on constructing extensive domain-specific datasets to train models. However, this approach often…
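Pure-RL training of the kind the abstract describes typically relies on a rule-based reward rather than supervised labels. The sketch below shows one hypothetical such reward, scoring tool-call well-formedness and final-answer match; the `<tool_call>` tags, the weights, and the matching rule are assumptions, not Tool Zero's actual design:

```python
import json

def tool_rl_reward(completion: str, expected_answer: str) -> float:
    """Hypothetical rule-based reward for tool-augmented RL rollouts:
    +0.2 if the model emits a syntactically valid JSON tool call,
    +0.8 if the final answer matches the reference."""
    reward = 0.0
    if "<tool_call>" in completion and "</tool_call>" in completion:
        payload = completion.split("<tool_call>")[1].split("</tool_call>")[0]
        try:
            call = json.loads(payload)
            if "name" in call and "arguments" in call:
                reward += 0.2                  # well-formed tool call
        except json.JSONDecodeError:
            pass
    if completion.strip().endswith(expected_answer):
        reward += 0.8                          # correct final answer
    return reward

print(tool_rl_reward(
    '<tool_call>{"name": "search", "arguments": {"q": "capital of France"}}'
    '</tool_call> The answer is Paris', 'Paris'))   # prints 1.0
```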

Optimizing Kernel Discrepancies via Subset Selection

arXiv:2511.02706v1 Announce Type: cross Abstract: Kernel discrepancies are a powerful tool for analyzing worst-case errors in quasi-Monte Carlo (QMC) methods. Building on recent advances in optimizing such discrepancy measures, we extend the subset selection problem to the setting of kernel…
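The abstract is cut off before the algorithm, but the problem family it names is concrete enough for a baseline sketch: greedily pick a subset whose squared-MMD-style kernel discrepancy against the full point set is smallest. The Gaussian kernel, the greedy rule, and `greedy_kernel_subset` below are assumptions, not the paper's method:

```python
import numpy as np

def greedy_kernel_subset(X, m, gamma=1.0):
    """Greedy subset selection minimizing a squared-MMD-style kernel
    discrepancy between the chosen subset S and the full point set P:
    disc^2(S) = mean K(S,S) - 2 mean k(S,P) + const."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                    # Gaussian kernel matrix
    target = K.mean(axis=1)                    # mean kernel vs full set
    chosen: list[int] = []
    for _ in range(m):
        best, best_val = -1, np.inf
        for i in range(n):
            if i in chosen:
                continue
            S = chosen + [i]
            val = K[np.ix_(S, S)].mean() - 2.0 * target[S].mean()
            if val < best_val:
                best, best_val = i, val
        chosen.append(best)
    return X[chosen]

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                     # points in the unit square
print(greedy_kernel_subset(pts, 8))            # 8 well-spread points
```

The greedy loop is the simplest baseline for this family; the quadratic kernel matrix limits it to modest n, which is exactly where smarter subset-selection formulations like the paper's become attractive.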