Functional Similarity Metric for Neural Networks: Overcoming Parametric Ambiguity via Activation Region Analysis

2026-04-20 19:00 GMT

arXiv:2604.16426v1 Announce Type: new
Abstract: As modern deep learning architectures grow in complexity, representational ambiguity becomes a critical barrier to their interpretability and reliable merging. For ReLU networks, identical functional mappings can be realized by entirely different weight configurations because of two algebraic symmetries: neuron permutation and positive diagonal scaling. Consequently, traditional parameter-based comparison methods are extremely sensitive to slight weight perturbations during training. This paper proposes a mathematically grounded approach to constructing a stable canonical representation of neural networks and a robust functional similarity metric. We shift the focus from comparing raw weights to analyzing the topology of neuron activation regions. The algorithm first eliminates scaling ambiguity via L2 normalization of weight vectors with compensating rescaling in the following layer. Next, discrete approximations of activation regions are generated as binary functional signatures evaluated over a data sample. To overcome the computational bottleneck of comparing large binary vectors, we adapt Locality-Sensitive Hashing, specifically MinHash, which provides a fast and statistically precise approximation of the Jaccard index. The final cross-network neuron matching is formulated as a linear sum assignment problem solved via the Hungarian algorithm. We demonstrate theoretically and experimentally that our metric mitigates the neuron “flickering” effect and is exceptionally robust to minor weight perturbations. This framework provides a solid foundation for model merging, transfer learning, objective assessment during pruning, and Explainable AI paradigms.
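The scaling-normalization step can be illustrated with a minimal NumPy sketch (not the paper's implementation; `W1`, `b1`, `W2` are hypothetical names for a one-hidden-layer ReLU MLP). Because ReLU is positively homogeneous, dividing a hidden neuron's incoming weights and bias by a positive constant and multiplying its outgoing weights by the same constant leaves the network function unchanged:

```python
import numpy as np

def canonicalize_layer(W1, b1, W2):
    """Remove positive-scaling ambiguity for one hidden ReLU layer.

    Each hidden neuron's incoming weight row (together with its bias) is
    divided by its L2 norm, and the corresponding column of the next layer's
    weights is multiplied by that norm.  Since ReLU(a*x) = a*ReLU(x) for
    a > 0, the overall network function is preserved.  Including the bias
    in the normalized vector is one possible convention, assumed here.
    """
    norms = np.linalg.norm(np.hstack([W1, b1[:, None]]), axis=1)  # per-neuron scale
    W1n = W1 / norms[:, None]       # normalized incoming weights
    b1n = b1 / norms                # normalized biases
    W2n = W2 * norms[None, :]       # compensation in the following layer
    return W1n, b1n, W2n
```

After this step every hidden neuron's augmented weight vector has unit L2 norm, so only the permutation symmetry remains to be resolved.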
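Finally, the matching step amounts to a standard linear sum assignment. A minimal sketch using SciPy (whose `linear_sum_assignment` solves the same problem, though via a Jonker-Volgenant variant rather than the classical Hungarian method): given a similarity matrix `sim` of estimated Jaccard indices between neurons of two networks, minimize total cost `1 - sim`:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_neurons(sim):
    """Match neurons across two networks by maximizing total similarity.

    sim[i, j] is the (estimated) Jaccard similarity between neuron i of
    network A and neuron j of network B.  Solving the linear sum assignment
    problem on the cost matrix 1 - sim yields a one-to-one matching.
    """
    rows, cols = linear_sum_assignment(1.0 - sim)
    return list(zip(rows, cols))
```

The resulting pairing resolves the permutation symmetry, completing the canonical alignment that scaling normalization began.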