Grouter: Decoupling Routing from Representation for Accelerated MoE Training
arXiv:2603.06626v1

Abstract: Traditional Mixture-of-Experts (MoE) training typically proceeds without any structural priors, effectively requiring the model to simultaneously train expert weights while searching for an optimal routing policy within a vast combinatorial space. This entanglement often leads…
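To make the entanglement the abstract describes concrete, below is a minimal PyTorch sketch of a conventional learned-routing top-k MoE layer, the baseline setting in which the router and the experts share a single optimization loop. This illustrates the standard setup, not Grouter's method (the truncated abstract does not describe it); the class name `TopKMoELayer`, the expert MLP shape, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Conventional learned-routing MoE layer (illustrative, not Grouter's API).
    The gate and the experts live in one optimization loop, so the routing
    policy is searched while the expert weights are still moving: the
    entanglement the abstract describes."""

    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # trainable router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        logits = self.gate(x)                        # routing scores depend on gate weights
        weights, idx = logits.topk(self.k, dim=-1)   # per-token top-k expert choice
        weights = F.softmax(weights, dim=-1)         # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# One joint backward pass updates the gate *and* the experts together:
layer = TopKMoELayer(d_model=64, n_experts=8, k=2)
x = torch.randn(16, 64)
layer(x).sum().backward()                 # gradients flow into both gate and experts
print(layer.gate.weight.grad is not None)  # True: the router is trained jointly
```

The per-token dispatch loop is written for readability; production MoE implementations batch tokens per expert with scatter/gather kernels instead. The point of the sketch is the final print: in this standard setup the router's parameters receive gradients from the same loss as the experts, which is the coupling the paper's title proposes to break.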
