Archives AI News

BEP: A Binary Error Propagation Algorithm for Binary Neural Networks Training

arXiv:2512.04189v1 Announce Type: new Abstract: Binary Neural Networks (BNNs), which constrain both weights and activations to binary values, offer substantial reductions in computational complexity, memory footprint, and energy consumption. These advantages make them particularly well suited for deployment on resource-constrained…
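The abstract's premise, that constraining weights and activations to binary values collapses multiply-accumulate arithmetic into cheap sign operations, can be illustrated with a minimal forward pass. This is a generic sign-binarization sketch, not the BEP algorithm from the paper; the function names and the +/-1 encoding are illustrative assumptions.

```python
import numpy as np

def binarize(x):
    # Generic sign binarization to {-1, +1}; zeros map to +1.
    # (Illustrative convention, not necessarily the paper's.)
    return np.where(x >= 0, 1.0, -1.0)

def binary_linear(x, w_real):
    # Forward pass of one binary layer: both activations and weights
    # are binarized, so the matmul is a sum of +/-1 products, which
    # hardware can realize with XNOR + popcount instead of multiplies.
    xb = binarize(x)
    wb = binarize(w_real)
    return xb @ wb

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4))   # 2 samples, 4 input features
w = rng.normal(size=(4, 3))   # real-valued latent weights, 3 outputs
y = binary_linear(x, w)
# Each output entry is a sum of four +/-1 products, so it lies in
# {-4, -2, 0, 2, 4} -- integer arithmetic only.
```

Training such networks is the hard part: the sign function has zero gradient almost everywhere, which is exactly the error-propagation problem the paper addresses.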

MechDetect: Detecting Data-Dependent Errors

arXiv:2512.04138v1 Announce Type: new Abstract: Data quality monitoring is a core challenge in modern information processing systems. While many approaches to detect data errors or shifts have been proposed, few studies investigate the mechanisms governing error generation. We argue that…

Extending Graph Condensation to Multi-Label Datasets: A Benchmark Study

arXiv:2412.17961v2 Announce Type: replace Abstract: As graph data grows increasingly complicated, training graph neural networks (GNNs) on large-scale datasets presents significant challenges, including computational resource constraints, data redundancy, and transmission inefficiencies. While existing graph condensation techniques have shown promise in…

Sequential Monte Carlo for Policy Optimization in Continuous POMDPs

arXiv:2505.16732v3 Announce Type: replace Abstract: Optimal decision-making under partial observability requires agents to balance reducing uncertainty (exploration) against pursuing immediate objectives (exploitation). In this paper, we introduce a novel policy optimization framework for continuous partially observable Markov decision processes (POMDPs)…
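In continuous POMDPs the agent's belief over the hidden state is typically tracked with particles, and Sequential Monte Carlo updates that belief as actions are taken and observations arrive. The following is a minimal bootstrap-particle-filter step under assumed 1-D Gaussian dynamics and observation noise; it illustrates the belief-tracking substrate such methods build on, not the paper's policy-optimization framework (the noise scales and additive action model are illustrative assumptions).

```python
import numpy as np

def pf_update(particles, weights, action, observation, rng,
              trans_noise=0.1, obs_noise=0.5):
    # One bootstrap particle filter step for a 1-D continuous POMDP
    # (assumed model: additive action, Gaussian transition/observation noise).
    # 1. Predict: propagate particles through the dynamics model.
    particles = particles + action + rng.normal(0.0, trans_noise, particles.shape)
    # 2. Correct: reweight by the Gaussian observation likelihood.
    lik = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()
    # 3. Resample: draw particles in proportion to weight, reset to uniform.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, 500)          # prior belief over the state
weights = np.full(500, 1.0 / 500)
particles, weights = pf_update(particles, weights,
                               action=0.5, observation=0.7, rng=rng)
```

After the update, the particle cloud concentrates near states consistent with both the executed action and the received observation; policy optimization then operates on this belief rather than on the unobserved state.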

The Initialization Determines Whether In-Context Learning Is Gradient Descent

arXiv:2512.04268v1 Announce Type: new Abstract: In-context learning (ICL) in large language models (LLMs) is a striking phenomenon, yet its underlying mechanisms remain only partially understood. Previous work connects linear self-attention (LSA) to gradient descent (GD), but this connection has primarily been…
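The LSA-GD connection the abstract refers to can be made concrete in its simplest known form: with suitably chosen (here, identity) key/query maps and the in-context targets as values, a linear self-attention readout on a query token reproduces one gradient-descent step on in-context least squares from a zero-initialized weight vector. This is a sketch of that standard equivalence, not the paper's specific initialization analysis; the dimensions and step size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 8
X = rng.normal(size=(n, d))        # in-context example inputs
y = X @ rng.normal(size=d)         # in-context targets (noiseless linear)
xq = rng.normal(size=d)            # query input
eta = 0.1                          # GD step size

# (a) One GD step on 0.5 * ||X w - y||^2 starting from w = 0:
#     gradient at w = 0 is -X^T y, so w1 = eta * X^T y.
w1 = eta * X.T @ y
pred_gd = xq @ w1

# (b) Linear self-attention: unnormalized scores (xq . x_i) with
#     identity key/query maps, values equal to the targets y_i.
pred_lsa = eta * np.sum((X @ xq) * y)

# Both expand to eta * sum_i (xq . x_i) y_i, so they coincide exactly.
assert np.isclose(pred_gd, pred_lsa)
```

The open question the paper targets is when trained attention weights actually land on (or near) such a GD-implementing configuration, which is where initialization enters.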

Random Feature Spiking Neural Networks

arXiv:2510.01012v2 Announce Type: replace Abstract: Spiking Neural Networks (SNNs) as Machine Learning (ML) models have recently received a lot of attention as a potentially more energy-efficient alternative to conventional Artificial Neural Networks. The non-differentiability and sparsity of the spiking mechanism…
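The "non-differentiability and sparsity of the spiking mechanism" the abstract mentions is visible in even the simplest spiking unit: a leaky integrate-and-fire neuron emits a binary spike through a Heaviside step, whose gradient is zero almost everywhere, which is why standard backpropagation fails and surrogate-gradient or random-feature approaches are used instead. A minimal LIF simulation, with illustrative leak and threshold values (not the paper's model):

```python
import numpy as np

def lif_forward(inputs, tau=0.9, threshold=1.0):
    # Leaky integrate-and-fire neuron: the membrane potential leaks by
    # a factor tau each step, accumulates the input current, and emits
    # a binary spike (Heaviside step) on crossing the threshold.
    v = 0.0
    spikes = []
    for i in inputs:
        v = tau * v + i
        s = 1.0 if v >= threshold else 0.0   # non-differentiable step
        spikes.append(s)
        v = v - s * threshold                # soft reset after a spike
    return np.array(spikes)

# Constant sub-threshold input: the neuron integrates until it
# crosses threshold, spikes, resets, and repeats -- a sparse train.
spikes = lif_forward(np.full(8, 0.6))
```

The output is sparse and binary, which is the source of the energy-efficiency claim; making the whole pipeline trainable despite the step function is the technical obstacle.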