Archives AI News

Fraud-Proof Revenue Division on Subscription Platforms

arXiv:2511.04465v1 Announce Type: cross Abstract: We study a model of subscription-based platforms where users pay a fixed fee for unlimited access to content, and creators receive a share of the revenue. Existing approaches to detecting fraud predominantly rely on machine…

Efficient Model Development through Fine-tuning Transfer

arXiv:2503.20110v2 Announce Type: replace-cross Abstract: Modern LLMs struggle with efficient updates, as each new pretrained model version requires repeating expensive alignment processes. This challenge also applies to domain- or language-specific models, where fine-tuning on specialized data must be redone for…

FLOWR.root: A flow matching based foundation model for joint multi-purpose structure-aware 3D ligand generation and affinity prediction

arXiv:2510.02578v3 Announce Type: replace-cross Abstract: We present FLOWR.root, an equivariant flow-matching model for pocket-aware 3D ligand generation with joint binding affinity prediction and confidence estimation. The model supports de novo generation, pharmacophore-conditional sampling, fragment elaboration, and multi-endpoint affinity prediction (pIC50,…

Test-Time Warmup for Multimodal Large Language Models

arXiv:2509.10641v2 Announce Type: replace Abstract: Multimodal Large Language Models (MLLMs) hold great promise for advanced reasoning at the intersection of text and images, yet they have not fully realized this potential. MLLMs typically integrate an LLM, a vision encoder, and…

FATE: A Formal Benchmark Series for Frontier Algebra of Multiple Difficulty Levels

arXiv:2511.02872v2 Announce Type: replace Abstract: Recent advances in large language models (LLMs) have demonstrated impressive capabilities in formal theorem proving, particularly on contest-based mathematical benchmarks like the IMO. However, these contests do not reflect the depth, breadth, and abstraction of…

Exact Expressive Power of Transformers with Padding

arXiv:2505.18948v2 Announce Type: replace Abstract: Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there more efficient alternatives to expand a…

Optimizing Reasoning Efficiency through Prompt Difficulty Prediction

arXiv:2511.03808v1 Announce Type: new Abstract: Reasoning language models perform well on complex tasks but are costly to deploy due to their size and long reasoning traces. We propose a routing approach that assigns each problem to the smallest model likely…
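The routing idea in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the tier names, the length-based difficulty heuristic, and the thresholds are all hypothetical stand-ins for a learned difficulty predictor.

```python
# Hypothetical sketch of difficulty-based routing: a predictor scores each
# prompt, and the prompt is sent to the cheapest model tier whose assumed
# capability threshold covers that score. All names here are illustrative.

MODEL_TIERS = [
    # (model name, max difficulty it is assumed to handle reliably)
    ("small-1b", 0.3),
    ("medium-7b", 0.6),
    ("large-70b", 1.0),
]

def predict_difficulty(prompt: str) -> float:
    """Stand-in predictor: longer prompts score as harder.
    A real system would use a learned regressor over prompt features."""
    return min(len(prompt) / 500.0, 1.0)

def route(prompt: str) -> str:
    """Return the smallest model tier expected to solve the prompt."""
    score = predict_difficulty(prompt)
    for model, threshold in MODEL_TIERS:
        if score <= threshold:
            return model
    return MODEL_TIERS[-1][0]
```

Under this sketch, a short prompt like "What is 2 + 2?" routes to the smallest tier, while a long, presumably harder prompt escalates to a larger one.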

One Size Does Not Fit All: Architecture-Aware Adaptive Batch Scheduling with DEBA

arXiv:2511.03809v1 Announce Type: new Abstract: Adaptive batch size methods aim to accelerate neural network training, but existing approaches apply identical adaptation strategies across all architectures, assuming a one-size-fits-all solution. We introduce DEBA (Dynamic Efficient Batch Adaptation), an adaptive batch scheduler…
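To make the notion of an adaptive batch scheduler concrete, here is a minimal sketch of one common heuristic behind such methods: grow the batch size when loss improvement stalls. This is not the paper's DEBA algorithm; the function name, thresholds, and growth factor are assumptions for illustration only.

```python
# Illustrative adaptive batch scheduler (NOT the paper's DEBA): doubles the
# batch size whenever the relative loss improvement falls below a plateau
# tolerance, capped at a maximum batch size.

def adapt_batch_size(batch_size: int, recent_losses: list[float],
                     max_batch: int = 1024, growth: int = 2,
                     plateau_tol: float = 0.01) -> int:
    """Return a (possibly larger) batch size based on recent loss values."""
    if len(recent_losses) < 2:
        return batch_size  # not enough history to judge a plateau
    improvement = recent_losses[-2] - recent_losses[-1]
    if improvement < plateau_tol * abs(recent_losses[-2]):
        # Loss has plateaued: larger batches reduce gradient noise.
        return min(batch_size * growth, max_batch)
    return batch_size
```

An architecture-aware scheduler, as the abstract suggests, would presumably tune parameters like the growth factor and tolerance per architecture rather than applying one fixed rule everywhere.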