FedIA: A Plug-and-Play Importance-Aware Gradient Pruning Aggregation Method for Domain-Robust Federated Graph Learning on Node Classification

2025-10-13 19:00 GMT · aimagpro.com

arXiv:2509.18171v2 Announce Type: replace
Abstract: Federated Graph Learning (FGL) under domain skew, as observed on platforms such as Twitch Gamers and multilingual Wikipedia networks, drives client models toward incompatible representations, rendering naive aggregation both unstable and ineffective. We find that the culprit is not the weighting scheme but the noisy gradient signal: empirical analysis of baseline methods suggests that the vast majority of gradient dimensions can be dominated by domain-specific variance. We therefore shift focus from an “aggregation-first” to a “projection-first” strategy that denoises client updates before they are combined. The proposed FedIA framework realises this Importance-Aware idea through a two-stage, plug-and-play pipeline: (i) a server-side top-$\rho$ mask keeps only the most informative coordinates (roughly the top 5%), and (ii) a lightweight influence-regularised momentum weight suppresses outlier clients. FedIA adds no extra uplink traffic and only negligible server memory, making it readily deployable. On both homogeneous (Twitch Gamers) and heterogeneous (Wikipedia) graphs, it yields smoother, more stable convergence and higher final accuracy than nine strong baselines. A convergence sketch further shows that dynamic projection maintains the optimal $\mathcal{O}(\sigma^{2}/\sqrt{T})$ rate.
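
The abstract only names the two stages, so the NumPy sketch below illustrates the projection-first idea under explicit assumptions: the top-$\rho$ mask ranks coordinates by the magnitude of the mean client update, and the influence score is a clipped cosine similarity to the consensus direction, smoothed with momentum. These choices, along with the names `top_rho_mask`, `momentum_weights`, and `fedia_aggregate`, are hypothetical stand-ins for illustration, not the paper's exact definitions.

```python
import numpy as np

def top_rho_mask(updates: np.ndarray, rho: float = 0.05) -> np.ndarray:
    """Stage (i), assumed form: zero out all but the top-rho fraction of
    coordinates, ranked by magnitude of the mean client update."""
    mean_update = updates.mean(axis=0)                    # shape (d,)
    k = max(1, int(rho * mean_update.size))
    keep = np.argpartition(np.abs(mean_update), -k)[-k:]  # k largest |coords|
    mask = np.zeros_like(mean_update)
    mask[keep] = 1.0
    return updates * mask                                 # broadcast over clients

def momentum_weights(updates: np.ndarray, prev_w: np.ndarray,
                     beta: float = 0.9) -> np.ndarray:
    """Stage (ii), assumed form: score each client by cosine similarity to
    the consensus direction (clipped at 0), smooth with momentum, normalise.
    Clients pointing away from the consensus decay toward zero weight."""
    consensus = updates.mean(axis=0)
    c_norm = np.linalg.norm(consensus) + 1e-12
    scores = np.array([
        max(0.0, float(u @ consensus) / (np.linalg.norm(u) * c_norm + 1e-12))
        for u in updates
    ])
    w = beta * prev_w + (1.0 - beta) * scores   # influence-regularised momentum
    return w / (w.sum() + 1e-12)

def fedia_aggregate(client_updates, prev_w, rho=0.05, beta=0.9):
    """Projection-first aggregation: denoise (mask) first, then weight."""
    masked = top_rho_mask(np.stack(client_updates), rho)
    w = momentum_weights(masked, prev_w, beta)
    return (w[:, None] * masked).sum(axis=0), w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 8 synthetic clients, 1000-dim flattened updates; client 7 is an outlier
    ups = [rng.normal(size=1000) for _ in range(8)]
    ups[7] = -10.0 * ups[0]
    w0 = np.full(8, 1 / 8)
    g, w = fedia_aggregate(ups, w0)
    print("nonzero coords:", int((g != 0).sum()), "weights:", w.round(3))
```

Because the mask is computed server-side from updates the clients already send, a sketch like this is consistent with the abstract's claim of no extra uplink traffic; the only added server state is the per-client weight vector carried between rounds.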