AI News Archives

Why Do Neural Networks Forget: A Study of Collapse in Continual Learning

arXiv:2603.04580v1 Announce Type: new Abstract: Catastrophic forgetting is a major problem in continual learning, and many approaches have been proposed to mitigate it. However, most of them are evaluated through task accuracy, which ignores the internal model structure. Recent research suggests…
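The snippet argues that task accuracy alone hides what happens to a model's internal representations. As a hedged illustration of the kind of internal-structure metric this points at (an assumption here, not necessarily the paper's own measure), the sketch below computes the entropy-based effective rank of a layer's features, which drops as representations collapse.

```python
# Minimal sketch, assuming effective rank as a proxy for representation collapse;
# the paper's actual metric is not stated in the snippet above.
import torch

def effective_rank(features: torch.Tensor, eps: float = 1e-12) -> float:
    """Entropy-based effective rank of an (n_samples, dim) feature matrix."""
    centered = features - features.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(centered)            # singular values of the feature matrix
    p = s / (s.sum() + eps)                       # normalized singular-value spectrum
    entropy = -(p * torch.log(p + eps)).sum()
    return float(torch.exp(entropy))              # exp(H) ~ number of "active" directions

# Toy check: near-collapsed features score far lower than well-spread ones.
spread = torch.randn(256, 64)
collapsed = torch.randn(256, 1) @ torch.randn(1, 64) + 0.01 * torch.randn(256, 64)
print(effective_rank(spread), effective_rank(collapsed))
```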

Bures-Wasserstein Flow Matching for Graph Generation

arXiv:2506.14020v4 Announce Type: replace Abstract: Graph generation has emerged as a critical task in fields ranging from drug discovery to circuit design. Contemporary approaches, notably diffusion and flow-based models, have achieved strong graph generation performance by constructing a probability path…
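For readers unfamiliar with how flow-based generators "construct a probability path", the following is a minimal generic flow-matching sketch, not the paper's Bures-Wasserstein variant: a velocity network is regressed against the velocity of a straight-line path between noise and data.

```python
# Generic flow-matching sketch (assumed linear probability path, toy 2-D data);
# the Bures-Wasserstein construction in the paper replaces this path, not shown here.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition the velocity prediction on the current point and the time t.
        return self.net(torch.cat([x_t, t], dim=-1))

def flow_matching_loss(model: VelocityNet, x1: torch.Tensor) -> torch.Tensor:
    """Regress the velocity along the straight-line path x_t = (1 - t) * x0 + t * x1."""
    x0 = torch.randn_like(x1)          # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1)     # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1        # point on the probability path
    target_v = x1 - x0                 # velocity of the linear path
    return ((model(x_t, t) - target_v) ** 2).mean()

# Usage: one gradient step on toy data standing in for graph features.
model = VelocityNet(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = flow_matching_loss(model, torch.randn(64, 2))
loss.backward()
opt.step()
```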

Seeds of something different

Kate Brown’s book, “Tiny Gardens Everywhere,” examines the hidden history of urban farming, its extensive use, and the politics of growing food.

Continuous Chain of Thought Enables Parallel Exploration and Reasoning

arXiv:2505.23648v3 Announce Type: replace Abstract: Modern language models generate chain-of-thought traces by autoregressively sampling tokens from a finite vocabulary. While this discrete sampling has achieved remarkable success, conducting chain-of-thought with continuously-valued tokens (CoT2) offers a richer and more expressive alternative.…
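To make the discrete-vs-continuous contrast concrete, here is a hedged sketch of a single decoding step under each regime. It is a generic illustration of continuous-valued chain-of-thought, not the exact CoT2 procedure: instead of sampling one token, the continuous step feeds back the softmax-weighted mixture of token embeddings, carrying several plausible tokens forward in superposition.

```python
# Sketch of one decoding step, discrete vs. continuous; toy sizes are assumptions.
import torch
import torch.nn.functional as F

def discrete_step(logits: torch.Tensor, embedding: torch.nn.Embedding) -> torch.Tensor:
    """Standard autoregressive step: sample one token, look up its embedding."""
    token = torch.multinomial(F.softmax(logits, dim=-1), num_samples=1)  # (batch, 1)
    return embedding(token).squeeze(1)                                   # (batch, dim)

def continuous_step(logits: torch.Tensor, embedding: torch.nn.Embedding) -> torch.Tensor:
    """Continuous step: the next input is the expected embedding under the softmax."""
    probs = F.softmax(logits, dim=-1)   # (batch, vocab)
    return probs @ embedding.weight     # (batch, dim) mixture of token embeddings

# Toy usage with a 10-token vocabulary and 8-dim embeddings.
emb = torch.nn.Embedding(10, 8)
logits = torch.randn(4, 10)
print(discrete_step(logits, emb).shape)    # torch.Size([4, 8])
print(continuous_step(logits, emb).shape)  # torch.Size([4, 8])
```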