FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning


arXiv:2511.22265v2 Announce Type: replace
Abstract: Federated learning (FL) enables collaborative training across clients while preserving privacy. Most existing FL methods assume homogeneous model architectures, but client heterogeneity in both data and resources makes this assumption impractical, motivating model-heterogeneous FL. To address this problem, we propose Federated Representation Entanglement (FedRE), a framework built upon a novel form of client knowledge termed the entangled representation. Specifically, each client aggregates its local representations into a single entangled representation using normalized random weights, and then applies the same weights to integrate the corresponding one-hot label encodings into an entangled-label encoding. Both are subsequently uploaded to the server to train a global classifier. During training, each entangled representation is supervised across categories via its entangled-label encoding, while the random weights are re-sampled at each round to introduce diversity, alleviating overconfidence in the global classifier and yielding smoother decision boundaries. Moreover, because each client uploads only a single entangled representation along with its entangled-label encoding, the scheme mitigates the risk of representation inversion attacks and reduces communication overhead. Extensive experiments demonstrate that FedRE achieves an effective trade-off among model performance, privacy protection, and communication overhead. The code is available at https://github.com/AIResearch-Group/FedRE.
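The entanglement step described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' released code: the function name `entangle`, the use of NumPy, and the uniform sampling of the random weights are all assumptions; the abstract specifies only that normalized random weights combine the local representations and that the same weights combine the one-hot labels.

```python
import numpy as np

def entangle(representations, labels_onehot, rng):
    """Combine local representations and one-hot labels with shared
    normalized random weights (illustrative sketch of FedRE's client step).

    representations: (n, d) array of per-sample feature vectors
    labels_onehot:   (n, C) array of one-hot label encodings
    """
    # Sample random weights and normalize them to sum to 1.
    # (Uniform sampling is an assumption; the paper only says "random".)
    w = rng.random(len(representations))
    w /= w.sum()
    # The same weights mix both the representations and the labels,
    # so the entangled-label encoding is a valid soft label.
    entangled_rep = w @ representations   # shape (d,)
    entangled_label = w @ labels_onehot   # shape (C,), sums to 1
    return entangled_rep, entangled_label

# Toy usage: 8 samples, 16-dim features, 4 classes.
rng = np.random.default_rng(0)
reps = rng.normal(size=(8, 16))
labels = np.eye(4)[rng.integers(0, 4, size=8)]
e_rep, e_label = entangle(reps, labels, rng)
```

Because the weights are re-sampled at every round, each round's entangled representation is a different convex combination of the client's samples, which is what introduces the diversity the abstract credits with smoothing the global classifier's decision boundaries.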