
Echoes Before Collapse: Deep Learning Detection of Flickering in Complex Systems

arXiv:2509.04683v1 Announce Type: new Abstract: Deep learning offers powerful tools for anticipating tipping points in complex systems, yet its potential for detecting flickering (noise-driven switching between coexisting stable states) remains unexplored. Flickering is a hallmark of reduced resilience in climate systems, ecosystems, financial markets, and other systems. It can precede critical regime shifts that are highly impactful but difficult to predict. Here we show that convolutional long short-term memory (CNN-LSTM) models, trained on synthetic time series generated from simple polynomial functions with additive noise, can accurately identify flickering patterns. Despite being trained on simplified dynamics, our models generalize to diverse stochastic systems and reliably detect flickering in empirical datasets, including dormouse body temperature records and palaeoclimate proxies from the African Humid Period. These findings demonstrate that deep learning can extract early warning signals from noisy, nonlinear time series, providing a flexible framework for identifying instability across a wide range of dynamical systems.
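The training-data recipe the abstract describes (simple polynomial drift plus additive noise) is easy to illustrate. Below is a minimal sketch, not the authors' code: it integrates an overdamped double-well system with the Euler-Maruyama scheme, which flickers between its two stable states once the noise is strong enough; all parameter values are illustrative.

```python
import numpy as np

def simulate_flickering(n_steps=2000, dt=0.01, sigma=0.8, seed=0):
    """Euler-Maruyama integration of dx = (x - x^3) dt + sigma dW.
    The drift x - x^3 is -V'(x) for the double-well potential
    V(x) = x^4/4 - x^2/2, so the system has stable states at x = +/-1;
    sufficiently strong additive noise makes it flicker between them."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 1.0  # start in the right-hand well
    for t in range(1, n_steps):
        drift = x[t - 1] - x[t - 1] ** 3
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# A crude supervision label: a window "flickers" if it visits both wells.
series = simulate_flickering()
is_flickering = bool((series > 0.5).any() and (series < -0.5).any())
```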

On approximating the $f$-divergence between two Ising models

arXiv:2509.05016v1 Announce Type: cross Abstract: The $f$-divergence is a fundamental notion that measures the difference between two distributions. In this paper, we study the problem of approximating the $f$-divergence between two Ising models, which is a generalization of recent work on approximating the TV-distance. Given two Ising models $\nu$ and $\mu$, which are specified by their interaction matrices and external fields, the problem is to approximate the $f$-divergence $D_f(\nu \,\|\, \mu)$ within an arbitrary relative error $\mathrm{e}^{\pm \varepsilon}$. For the $\chi^\alpha$-divergence with a constant integer $\alpha$, we establish both algorithmic and hardness results. The algorithm works in a parameter regime that matches the hardness result. Our algorithm can be extended to other $f$-divergences such as the $\alpha$-divergence, Kullback-Leibler divergence, Rényi divergence, Jensen-Shannon divergence, and squared Hellinger distance.
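For reference, the general definition of the $f$-divergence between discrete distributions $\nu$ and $\mu$ (standard in the literature, not specific to this paper's notation) is

```latex
D_f(\nu \,\|\, \mu) \;=\; \sum_{x} \mu(x)\, f\!\left(\frac{\nu(x)}{\mu(x)}\right),
\qquad f \text{ convex with } f(1) = 0,
```

with $f(t) = t \log t$ recovering the Kullback-Leibler divergence; one common convention for the $\chi^\alpha$-divergence takes $f(t) = (t-1)^\alpha$ for an integer $\alpha \ge 2$.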

KRAFT: A Knowledge Graph-Based Framework for Automated Map Conflation

arXiv:2509.04684v1 Announce Type: new Abstract: Digital maps play a crucial role in applications such as navigation, fleet management, and ride-sharing, which require them to be accurate and current, and therefore updated in a timely manner. While the majority of geospatial databases (GDBs) provide high-quality information, their data is (i) limited to specific regions and/or (ii) missing some entities, even in their covered areas. Map conflation is the process of augmenting one GDB with another to fill in missing spatial features. Existing map conflation methods suffer from two main limitations: (1) They are designed for the conflation of linear objects (e.g., road networks) and cannot simply be extended to non-linear objects, thus missing information about most entities in the map. (2) They are heuristic algorithmic approaches based on pre-defined rules, unable to learn entity matching in a data-driven manner. To address these limitations, we design KRAFT, a learning-based approach consisting of three parts: (1) Knowledge Graph Construction, where each GDB is represented by a knowledge graph; (2) Map Matching, where we use a knowledge graph alignment method together with a geospatial feature encoder to match entities across the obtained knowledge graphs; and (3) Map Merging, where we merge the entities matched by the previous modules in a consistent manner, using a mixed integer linear programming formulation that fully merges the GDBs without adding any inconsistencies. Our experimental evaluation shows that not only does KRAFT achieve outstanding performance compared to state-of-the-art and baseline methods in map conflation tasks, but its modules (e.g., Map Matching and Map Merging) also separately outperform traditional matching and merging methods.
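The Map Merging step can be read as a maximum-weight matching integer program: keep the highest-scoring set of candidate matches subject to each entity being merged at most once. This is a simplified, hypothetical reading sketched with the PuLP solver; the candidate pairs, scores, and constraints are illustrative and are not KRAFT's actual formulation.

```python
import pulp

# Hypothetical candidate matches between entities of GDB A and GDB B,
# scored by the matching module (values made up for illustration).
sim = {("a1", "b1"): 0.9, ("a1", "b2"): 0.4, ("a2", "b2"): 0.8}

prob = pulp.LpProblem("map_merging", pulp.LpMaximize)
x = {p: pulp.LpVariable(f"match_{p[0]}_{p[1]}", cat="Binary") for p in sim}

# Objective: keep the highest-scoring consistent set of matches.
prob += pulp.lpSum(sim[p] * x[p] for p in sim)

# Consistency: every entity participates in at most one merge.
for e in {i for i, _ in sim}:
    prob += pulp.lpSum(x[(i, j)] for (i, j) in sim if i == e) <= 1
for e in {j for _, j in sim}:
    prob += pulp.lpSum(x[(i, j)] for (i, j) in sim if j == e) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
merged = [p for p in sim if x[p].value() == 1]  # e.g. [("a1","b1"), ("a2","b2")]
```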

Room-acoustic simulations as an alternative to measurements for audio-algorithm evaluation

arXiv:2509.05175v1 Announce Type: cross Abstract: Audio-signal-processing and audio-machine-learning (ASP/AML) algorithms are ubiquitous in modern technology like smart devices, wearables, and entertainment systems. Development of such algorithms and models typically involves a formal evaluation to demonstrate their effectiveness and progress beyond the state of the art. Ideally, a thorough evaluation should cover many diverse application scenarios and room-acoustic conditions. In practice, however, evaluation datasets are often limited in size and diversity because they rely on costly and time-consuming measurements. This paper explores how room-acoustic simulations can be used for evaluating ASP/AML algorithms. To this end, we evaluate three ASP/AML algorithms with room-acoustic measurements and with data from different simulation engines, and assess how well the evaluation results obtained from simulations match those obtained from measurements. The investigation compares a numerical wave-based solver with two geometrical-acoustics simulators. While the numerical wave-based simulations yielded evaluation results similar to the measurements for all three ASP/AML algorithms, the geometrical-acoustics simulations could not replicate the measured evaluation results as reliably.
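The protocol being compared reduces to: run the same algorithm on audio rendered with measured versus simulated room impulse responses (RIRs) and check whether the scores agree. A schematic sketch, with exponentially decaying noise standing in for real RIRs and RMS level standing in for a real ASP/AML metric (both stand-ins are assumptions, not the paper's setup):

```python
import numpy as np
from scipy.signal import fftconvolve

def evaluate(metric, dry, rirs):
    """Score an algorithm/metric on the dry signal rendered in each room."""
    return [metric(fftconvolve(dry, rir)[: len(dry)]) for rir in rirs]

rms = lambda x: float(np.sqrt(np.mean(x ** 2)))  # placeholder metric

rng = np.random.default_rng(0)
dry = rng.standard_normal(16000)  # 1 s of "speech" at 16 kHz
decay = np.exp(-np.arange(8000) / 2000)
rir_measured = rng.standard_normal(8000) * decay          # stand-in for a measured RIR
rir_simulated = rng.standard_normal(8000) * decay * 1.05  # stand-in for a simulated RIR

# The paper's question: how closely do these two evaluations agree?
print(evaluate(rms, dry, [rir_measured]), evaluate(rms, dry, [rir_simulated]))
```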

CPEP: Contrastive Pose-EMG Pre-training Enhances Gesture Generalization on EMG Signals

arXiv:2509.04699v1 Announce Type: new Abstract: Hand gesture classification using high-quality structured data such as videos, images, and hand skeletons is a well-explored problem in computer vision. Leveraging low-power, cost-effective biosignals, e.g., surface electromyography (sEMG), allows for continuous gesture prediction on wearables. In this paper, we demonstrate that learning representations from weak-modality data that are aligned with those from structured, high-quality data can improve representation quality and enable zero-shot classification. Specifically, we propose a Contrastive Pose-EMG Pre-training (CPEP) framework to align EMG and pose representations, in which we learn an EMG encoder that produces high-quality, pose-informative representations. We assess the gesture classification performance of our model through linear probing and zero-shot setups. Our model outperforms emg2pose benchmark models by up to 21% on in-distribution gesture classification and 72% on unseen (out-of-distribution) gesture classification.
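The abstract does not spell out the alignment objective; a CLIP-style symmetric InfoNCE loss over paired (EMG, pose) windows is a common choice for this kind of cross-modal alignment and serves as a plausible sketch (PyTorch; the encoders are assumed, not defined here):

```python
import torch
import torch.nn.functional as F

def cpep_alignment_loss(emg_emb, pose_emb, temperature=0.07):
    """Symmetric InfoNCE: each EMG embedding is pulled toward its paired
    pose embedding and pushed away from the other pairs in the batch."""
    emg = F.normalize(emg_emb, dim=-1)
    pose = F.normalize(pose_emb, dim=-1)
    logits = emg @ pose.t() / temperature              # (B, B) cosine similarities
    targets = torch.arange(emg.size(0), device=emg.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with hypothetical encoders producing (batch, dim) embeddings:
# loss = cpep_alignment_loss(emg_encoder(emg_windows), pose_encoder(pose_windows))
```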

LatticeWorld: A Multimodal Large Language Model-Empowered Framework for Interactive Complex World Generation

arXiv:2509.05263v1 Announce Type: cross Abstract: Recent research has increasingly focused on developing 3D world models that simulate complex real-world scenarios. World models have found broad applications across various domains, including embodied AI, autonomous driving, and entertainment. A more realistic simulation with accurate physics will effectively narrow the sim-to-real gap and allow us to conveniently gather rich information about the real world. While traditional manual modeling has enabled the creation of virtual 3D scenes, modern approaches have leveraged advanced machine learning algorithms for 3D world generation, with the most recent advances focusing on generative methods that can create virtual worlds from user instructions. This work explores such a research direction by proposing LatticeWorld, a simple yet effective 3D world generation framework that streamlines the industrial production pipeline of 3D environments. LatticeWorld leverages lightweight LLMs (LLaMA-2-7B) alongside an industry-grade rendering engine (e.g., Unreal Engine 5) to generate dynamic environments. Our proposed framework accepts textual descriptions and visual instructions as multimodal inputs and creates large-scale 3D interactive worlds with dynamic agents, featuring competitive multi-agent interaction, high-fidelity physics simulation, and real-time rendering. We conduct comprehensive experiments to evaluate LatticeWorld, showing that it achieves superior accuracy in scene layout generation and visual fidelity. Moreover, LatticeWorld achieves over a $90\times$ increase in industrial production efficiency while maintaining high creative quality compared with traditional manual production methods. Our demo video is available at https://youtu.be/8VWZXpERR18
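The pipeline runs from multimodal instructions through an LLM-emitted scene description to the rendering engine, but the abstract does not give the intermediate format. A purely hypothetical sketch of what such a layout might look like, with every field name invented here and a minimal validity check before handing it to an engine adapter:

```python
import json

# Hypothetical layout emitted by the fine-tuned LLM (field names invented).
llm_output = """
{
  "terrain": "grassland",
  "objects": [
    {"type": "tree",  "cell": [3, 7]},
    {"type": "river", "cells": [[0, 2], [1, 2], [2, 3]]}
  ],
  "agents": [
    {"type": "wolf", "cell": [5, 5], "behavior": "patrol"}
  ]
}
"""

layout = json.loads(llm_output)
assert all("type" in item for item in layout["objects"] + layout["agents"])
# A downstream adapter would translate this into engine assets, e.g. UE5 actors.
```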

Natural Spectral Fusion: p-Exponent Cyclic Scheduling and Early Decision-Boundary Alignment in First-Order Optimization

arXiv:2509.04713v1 Announce Type: new Abstract: Spectral behaviors have been widely discussed in machine learning, yet the optimizer's own spectral bias remains unclear. We argue that first-order optimizers exhibit an intrinsic frequency preference that significantly reshapes the optimization path. To address this, we propose Natural Spectral Fusion (NSF): reframing training as controllable spectral coverage and information fusion rather than merely scaling step sizes. NSF has two core principles: treating the optimizer as a spectral controller that dynamically balances low- and high-frequency information; and periodically reweighting frequency bands at negligible cost, without modifying the model, data, or training pipeline. We realize NSF via a p-exponent extension of the second-moment term, enabling both positive and negative exponents, and implement it through cyclic scheduling. Theory and experiments show that adaptive methods emphasize low frequencies, SGD is near-neutral, and negative exponents amplify high-frequency information. Cyclic scheduling broadens spectral coverage, improves cross-band fusion, and induces early decision-boundary alignment, where accuracy improves even while loss remains high. Across multiple benchmarks, with identical learning-rate strategies and fixed hyperparameters, p-exponent cyclic scheduling consistently reduces test error and demonstrates distinct convergence behavior; on some tasks, it matches baseline accuracy with only one-quarter of the training cost. Overall, NSF reveals the optimizer's role as an active spectral controller and provides a unified, controllable, and efficient framework for first-order optimization.
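The abstract does not give the exact update rule; a natural reading of a "p-exponent extension of the second-moment term" is an Adam-style step that divides by $v^p$ instead of $\sqrt{v}$, with $p$ swept cyclically. A minimal sketch under that assumption (all names and schedule parameters are hypothetical):

```python
import numpy as np

def p_exponent_step(param, grad, state, lr=1e-3, beta2=0.999, p=0.5, eps=1e-8):
    """Generalized second-moment preconditioning: divide the gradient by v**p.
    p = 0.5 recovers an Adam/RMSProp-like step, p = 0 is plain SGD, and a
    negative p multiplies by v**|p|, amplifying large-gradient directions,
    consistent with the abstract's claim about negative exponents."""
    v = state.get("v", np.zeros_like(grad))
    v = beta2 * v + (1 - beta2) * grad ** 2
    state["v"] = v
    return param - lr * grad / (v + eps) ** p

def cyclic_p(step, period=1000, p_min=-0.2, p_max=0.6):
    """Triangle-wave schedule sweeping the exponent p across the band each cycle."""
    phase = (step % period) / period
    return p_min + (p_max - p_min) * (1 - abs(2 * phase - 1))
```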

Reinforcement Learning for Aligning Large Language Models Agents with Interactive Environments: Quantifying and Mitigating Prompt Overfitting

arXiv:2410.19920v3 Announce Type: replace Abstract: Reinforcement learning (RL) is a promising approach for aligning the knowledge of large language models (LLMs) with sequential decision-making tasks. However, few studies have thoroughly investigated how fine-tuning LLM agents with RL in a specific environment affects their capabilities. In this paper, we propose a novel framework to analyze the sensitivity of LLMs to prompt formulations following RL training in a textual environment. Our findings reveal that the performance of LLMs degrades when they are faced with prompt formulations different from those used during the RL training phase. We further analyze the source of this sensitivity by examining the model's internal representations and salient tokens. Finally, we propose using a contrastive loss to mitigate this sensitivity and improve the robustness and generalization capabilities of LLMs.
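The contrastive mitigation can be sketched as an auxiliary term added to the RL objective that ties together the model's internal representations of the same environment state under different prompt formulations. A simplified PyTorch sketch; the paper's actual loss is not specified in the abstract, and this positive-pair variant omits the negatives a full contrastive loss would use:

```python
import torch.nn.functional as F

def prompt_robust_objective(policy_loss, h_prompt_a, h_prompt_b, lam=0.1):
    """policy_loss: the usual RL loss; h_prompt_a / h_prompt_b: hidden states
    of the same observations rendered with two different prompt templates.
    The auxiliary term rewards representations that ignore the template."""
    sim = F.cosine_similarity(h_prompt_a, h_prompt_b, dim=-1)  # (batch,)
    return policy_loss + lam * (1.0 - sim).mean()
```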

CoVeR: Conformal Calibration for Versatile and Reliable Autoregressive Next-Token Prediction

arXiv:2509.04733v1 Announce Type: new Abstract: Autoregressive pre-trained models combined with decoding methods have achieved impressive performance on complex reasoning tasks. While mainstream decoding strategies such as beam search can generate plausible candidate sets, they often lack provable coverage guarantees and struggle to effectively balance search efficiency with the need for versatile trajectories, particularly those involving long-tail sequences that are essential in certain real-world applications. To address these limitations, we propose CoVeR, a novel model-free decoding strategy within the conformal prediction framework that simultaneously maintains a compact search space and ensures high coverage probability over desirable trajectories. Theoretically, we establish a PAC-style generalization bound, guaranteeing that CoVeR asymptotically achieves a coverage rate of at least $1 - \alpha$ for any target level $\alpha \in (0,1)$.
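Coverage guarantees of this kind typically follow the standard split-conformal recipe: calibrate a nonconformity threshold on held-out sequences, then keep every candidate trajectory within it. A generic sketch of that recipe, not CoVeR's specific score, assuming nonconformity is the negative log-probability of the true continuation:

```python
import numpy as np

def calibrate(nonconformity_cal, alpha=0.1):
    """Split-conformal threshold: nonconformity_cal[i] scores the TRUE
    continuation of calibration sequence i (higher = worse fit). Under
    exchangeability, the resulting sets cover at rate >= 1 - alpha."""
    n = len(nonconformity_cal)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    return np.quantile(nonconformity_cal, level, method="higher")

def prediction_set(candidates, scores, qhat):
    """Keep every decoded candidate whose nonconformity clears the threshold."""
    return [c for c, s in zip(candidates, scores) if s <= qhat]

# Example: qhat = calibrate(-np.log(true_token_probs)); then filter beam candidates.
```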