Temporal Memory for Resource-Constrained Agents: Continual Learning via Stochastic Compress-Add-Smooth


arXiv:2604.00067v1 Announce Type: new
Abstract: An agent that operates sequentially must incorporate new experience without forgetting old experience, under a fixed memory budget. We propose a framework in which memory is not a parameter vector but a stochastic process: a Bridge Diffusion on a replay interval $[0,1]$, whose terminal marginal encodes the present and whose intermediate marginals encode the past. New experience is incorporated via a three-step \emph{Compress–Add–Smooth} (CAS) recursion. We instantiate the framework on models whose marginal densities are Gaussian mixtures with a fixed number~$K$ of components in $d$ dimensions; temporal complexity is controlled by a fixed number~$L$ of piecewise-linear protocol segments whose nodes store Gaussian-mixture states. The entire recursion costs $O(LKd^2)$ flops per day — no backpropagation, no stored data, no neural networks — making it viable for controller-light hardware.
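The abstract does not spell out the three CAS operators, so the sketch below only fixes the data layout and the cost accounting: a protocol of Gaussian-mixture nodes over $[0,1]$ updated with $O(LKd^2)$ work per step. The `GMState` container, the `blend` rule, and the concrete compress/add/smooth choices are placeholder assumptions, not the paper's definitions.

```python
import numpy as np

class GMState:
    """Gaussian-mixture marginal: K components in d dimensions."""
    def __init__(self, weights, means, covs):
        self.weights = np.asarray(weights, float)   # (K,)
        self.means   = np.asarray(means, float)     # (K, d)
        self.covs    = np.asarray(covs, float)      # (K, d, d)

def blend(a, b, lam):
    """Per-component linear interpolation of moments between two nodes.
    Costs O(K d^2) because of the covariance arrays (placeholder rule)."""
    return GMState((1 - lam) * a.weights + lam * b.weights,
                   (1 - lam) * a.means   + lam * b.means,
                   (1 - lam) * a.covs    + lam * b.covs)

def cas_update(protocol, new_state):
    """One hypothetical Compress-Add-Smooth step on L+1 nodes over [0, 1].

    Compress: re-approximate the protocol on a coarser grid with one node
              fewer (the lossy step that causes forgetting).
    Add:      append today's marginal as the new terminal node at t = 1.
    Smooth:   lightly average interior nodes with their neighbours.
    Every node is touched a constant number of times with O(K d^2) work,
    so the whole recursion is O(L K d^2).
    """
    L = len(protocol) - 1
    old_t = np.linspace(0.0, 1.0, L + 1)
    new_t = np.linspace(0.0, 1.0, L)                 # coarser grid
    compressed = []
    for t in new_t:                                  # Compress
        j = min(np.searchsorted(old_t, t, side="right") - 1, L - 1)
        lam = (t - old_t[j]) / (old_t[j + 1] - old_t[j])
        compressed.append(blend(protocol[j], protocol[j + 1], lam))
    compressed.append(new_state)                     # Add
    for i in range(1, len(compressed) - 1):          # Smooth
        compressed[i] = blend(compressed[i - 1], compressed[i + 1], 0.5)
    return compressed
```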
Forgetting in this framework arises not from parameter interference but from lossy temporal compression: the re-approximation of a finer protocol by a coarser one under a fixed segment budget. We find that the retention half-life scales linearly as $a_{1/2} \approx c\,L$ with a constant $c>1$ that depends on the dynamics but not on the mixture complexity~$K$, the dimension~$d$, or the geometry of the target family. The constant~$c$ admits an information-theoretic interpretation analogous to the Shannon channel capacity. The stochastic process underlying the bridge provides temporally coherent “movie” replay — compressed narratives of the agent’s history, demonstrated visually on an MNIST latent-space illustration. The framework provides a fully analytical “Ising model” of continual learning in which the mechanism, rate, and form of forgetting can be studied with mathematical precision.
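The linear retention law lends itself to a simple sanity check. The sketch below synthesizes half-life measurements from an assumed slope and recovers it by a least-squares fit through the origin; the slope `c_true`, the grid of segment budgets, and the noise level are illustrative placeholders, not numbers reported in the paper.

```python
import numpy as np

# Check the scaling a_{1/2} ≈ c * L on synthetic half-life measurements.
rng = np.random.default_rng(0)
c_true = 2.3                                       # placeholder slope, c > 1
L_vals = np.array([4, 8, 16, 32, 64], float)       # segment budgets
a_half = c_true * L_vals * (1 + 0.05 * rng.standard_normal(L_vals.size))
c_hat = (L_vals @ a_half) / (L_vals @ L_vals)      # least squares through origin
print(f"recovered slope c ≈ {c_hat:.2f}")
```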