Archives AI News

Modality-Balanced Collaborative Distillation for Multi-Modal Domain Generalization

arXiv:2511.20258v1 Announce Type: cross Abstract: Weight Averaging (WA) has emerged as a powerful technique for enhancing generalization by promoting convergence to a flat loss landscape, which correlates with stronger out-of-distribution performance. However, applying WA directly to multi-modal domain generalization (MMDG)…
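As background for the entry above: plain Weight Averaging combines several trained checkpoints into one model by averaging parameters elementwise. The sketch below shows only that generic WA operation on toy parameter dicts, not the paper's modality-balanced MMDG method; the names `average_weights`, `ckpt_a`, and `ckpt_b` are illustrative.

```python
# Generic Weight Averaging (WA) sketch: checkpoints are represented as
# {parameter_name: list-of-floats} dicts and averaged uniformly, key by key.

def average_weights(checkpoints):
    """Uniformly average a list of parameter dicts with identical shapes."""
    n = len(checkpoints)
    return {
        name: [sum(ckpt[name][i] for ckpt in checkpoints) / n
               for i in range(len(checkpoints[0][name]))]
        for name in checkpoints[0]
    }

ckpt_a = {"layer.w": [1.0, 2.0], "layer.b": [0.0]}
ckpt_b = {"layer.w": [3.0, 4.0], "layer.b": [2.0]}
print(average_weights([ckpt_a, ckpt_b]))
# {'layer.w': [2.0, 3.0], 'layer.b': [1.0]}
```

Averaging in weight space like this tends to land in flatter regions of the loss landscape than any single checkpoint, which is the property the abstract links to out-of-distribution robustness.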

RFX: High-Performance Random Forests with GPU Acceleration and QLORA Compression

arXiv:2511.19493v1 Announce Type: new Abstract: RFX (Random Forests X), where X stands for compression or quantization, presents a production-ready implementation of Breiman and Cutler’s Random Forest classification methodology in Python. RFX v1.0 provides complete classification: out-of-bag error estimation, overall and…
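The out-of-bag (OOB) error the abstract mentions is a standard Random Forest feature: each tree is fit on a bootstrap sample, and every sample is scored only by the trees that never saw it. The toy sketch below illustrates that bookkeeping only, with each "tree" reduced to the majority class of its bootstrap sample; it is not RFX's GPU implementation, and all names are illustrative.

```python
import random
from collections import Counter

random.seed(0)

# Toy labels; each "tree" is just the majority class of its bootstrap
# sample, standing in for a real decision tree.
y = [0, 0, 0, 1, 1, 1, 1, 1]
n, n_trees = len(y), 25

votes = [Counter() for _ in range(n)]  # OOB votes accumulated per sample
for _ in range(n_trees):
    bag = [random.randrange(n) for _ in range(n)]   # bootstrap indices
    majority = Counter(y[i] for i in bag).most_common(1)[0][0]
    for i in set(range(n)) - set(bag):              # out-of-bag samples
        votes[i][majority] += 1

# OOB error: each sample is predicted only by trees that never trained on it.
errors = sum(1 for i in range(n)
             if votes[i] and votes[i].most_common(1)[0][0] != y[i])
covered = sum(1 for v in votes if v)
print(f"OOB error: {errors}/{covered}")
```

Because roughly a third of samples are left out of each bootstrap draw, OOB error gives a near-free generalization estimate without a held-out validation set.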

New York Smells: A Large Multimodal Dataset for Olfaction

arXiv:2511.20544v1 Announce Type: cross Abstract: While olfaction is central to how animals perceive the world, this rich chemical sensory modality remains largely inaccessible to machines. One key bottleneck is the lack of diverse, multimodal olfactory training data collected in natural…

Elucidated Rolling Diffusion Models for Probabilistic Weather Forecasting

arXiv:2506.20024v2 Announce Type: replace Abstract: Diffusion models are a powerful tool for probabilistic forecasting, yet most applications in high-dimensional complex systems predict future states individually. This approach struggles to model complex temporal dependencies and fails to explicitly account for the…
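For context on the entry above: diffusion-based forecasters build on a forward noising process that interpolates between a clean state and Gaussian noise. The sketch below shows only that generic DDPM-style forward step, not the paper's rolling-diffusion scheme; `noise_state`, `abar_t`, and the toy state are illustrative.

```python
import math
import random

random.seed(0)

# Generic DDPM-style forward noising: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps,
# where abar_t in [0, 1] is the cumulative signal-retention coefficient.
def noise_state(x0, abar_t):
    return [math.sqrt(abar_t) * v + math.sqrt(1 - abar_t) * random.gauss(0, 1)
            for v in x0]

state = [1.0, -0.5, 2.0]          # a toy "weather state" vector
print(noise_state(state, 0.9))    # high abar_t: mostly signal
print(noise_state(state, 0.1))    # low abar_t: mostly noise
```

A learned reverse process inverts this corruption; sampling it repeatedly from the same conditioning yields an ensemble of forecasts, which is what makes the approach probabilistic.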

Softmax Transformers are Turing-Complete

arXiv:2511.20038v1 Announce Type: cross Abstract: Hard-attention Chain-of-Thought (CoT) transformers are known to be Turing-complete, but whether softmax-attention CoT transformers are Turing-complete has remained an open problem. In this paper, we prove a stronger result that length-generalizable softmax…

STARFlow-V: End-to-End Video Generative Modeling with Normalizing Flow

arXiv:2511.20462v1 Announce Type: cross Abstract: Normalizing flows (NFs) are end-to-end likelihood-based generative models for continuous data, and have recently regained attention with encouraging progress on image generation. Yet in the video generation domain, where spatiotemporal complexity and computational cost are…
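The "likelihood-based" property mentioned above comes from the change-of-variables formula: a flow gives exact log-densities by mapping data to a simple base distribution and adding the log-determinant of the Jacobian. The one-dimensional affine sketch below illustrates that formula only, not STARFlow-V's architecture; `log_prob` and its parameters are illustrative.

```python
import math

# With z = f(x) = (x - mu) / sigma and a standard-normal base density,
# the exact log-likelihood is
#   log p(x) = log N(f(x); 0, 1) + log |df/dx| = log N(z) - log sigma.
def log_prob(x, mu=0.0, sigma=2.0):
    z = (x - mu) / sigma
    log_base = -0.5 * (z * z + math.log(2 * math.pi))  # standard-normal log-pdf
    log_det = -math.log(sigma)                         # log |df/dx| = -log sigma
    return log_base + log_det

print(log_prob(0.0))  # equals the N(0, sigma^2) log-density at x = 0
```

Stacking many such invertible transforms (with tractable Jacobians) is what lets flows model complex data while keeping the likelihood exact, which the abstract contrasts with the cost of doing this at video scale.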