AI News Archives

Out-of-Sight Trajectories: Tracking, Fusion, and Prediction

arXiv:2509.15219v1 Announce Type: cross Abstract: Trajectory prediction is a critical task in computer vision and autonomous systems, playing a key role in autonomous driving, robotics, surveillance, and virtual reality. Existing methods often rely on complete and noise-free observational data, overlooking…

DyWPE: Signal-Aware Dynamic Wavelet Positional Encoding for Time Series Transformers

arXiv:2509.14640v1 Announce Type: new Abstract: Existing positional encoding methods in transformers are fundamentally signal-agnostic, deriving positional information solely from sequence indices while ignoring the underlying signal characteristics. This limitation is particularly problematic for time series analysis, where signals exhibit complex,…
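To make the "signal-agnostic" point concrete: the standard sinusoidal positional encoding (Vaswani et al., 2017) that this abstract contrasts itself against is computed purely from position indices, so two completely different time series receive identical encodings. A minimal NumPy sketch (not the paper's DyWPE method, just the baseline it critiques):

```python
import numpy as np

def sinusoidal_pe(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encoding.

    Note: the result depends only on the position index and the embedding
    dimension -- never on the signal values themselves. That is the
    "signal-agnostic" limitation the abstract refers to.
    """
    pos = np.arange(seq_len)[:, None]                       # (seq_len, 1)
    i = np.arange(d_model)[None, :]                         # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])                    # even dims: sine
    pe[:, 1::2] = np.cos(angle[:, 1::2])                    # odd dims: cosine
    return pe

# The encoding is fixed once (seq_len, d_model) are fixed, regardless of data:
pe = sinusoidal_pe(seq_len=128, d_model=16)
```

A signal-aware scheme like the one proposed here would instead derive positional information from the series itself (per the title, via a dynamic wavelet decomposition), so the encoding changes when the signal does.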

Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning

arXiv:2501.12046v2 Announce Type: replace Abstract: Training machine learning models on decentralized private data via federated learning (FL) poses two key challenges: communication efficiency and privacy protection. In this work, we address these challenges within the trusted aggregator model by introducing…

Stochastic Clock Attention for Aligning Continuous and Ordered Sequences

arXiv:2509.14678v1 Announce Type: new Abstract: We formulate an attention mechanism for continuous and ordered sequences that explicitly functions as an alignment model, which serves as the core of many sequence-to-sequence tasks. Standard scaled dot-product attention relies on positional encodings and…
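For context on the baseline this abstract mentions: standard scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, is order-invariant over the keys and values unless positional encodings are injected separately, which is why sequence models need an explicit alignment or position signal. A minimal NumPy sketch of that standard mechanism (not the paper's clock attention):

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray,
                                 V: np.ndarray) -> np.ndarray:
    """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Without positional encodings, permuting the key/value rows together
    leaves the output unchanged -- the mechanism has no notion of order.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)    # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                              # convex combination of values
```

Permutation invariance is easy to check: applying the same random permutation to the rows of K and V produces an identical output, which illustrates why an explicit alignment model matters for ordered sequences.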

Self-Adapting Language Models

arXiv:2506.10943v2 Announce Type: replace Abstract: Large language models (LLMs) are powerful but static; they lack mechanisms to adapt their weights in response to new tasks, knowledge, or examples. We introduce Self-Adapting LLMs (SEAL), a framework that enables LLMs to self-adapt…

Towards Pre-trained Graph Condensation via Optimal Transport

arXiv:2509.14722v1 Announce Type: new Abstract: Graph condensation (GC) aims to distill the original graph into a small-scale graph, mitigating redundancy and accelerating GNN training. However, conventional GC approaches heavily rely on rigid GNNs and task-specific supervision. Such a dependency severely…