$\pi_{\texttt{RL}}$: Online RL Fine-tuning for Flow-based Vision-Language-Action Models

2025-11-30 20:00 GMT

arXiv:2510.25889v2 Announce Type: replace
Abstract: Vision-Language-Action (VLA) models enable robots to understand and perform complex tasks from multimodal input. Although recent work explores using reinforcement learning (RL) to automate the laborious data collection process in scaling supervised fine-tuning (SFT), applying large-scale RL to flow-based VLAs (e.g., $\pi_0$, $\pi_{0.5}$) remains challenging due to intractable action log-likelihoods arising from iterative denoising. We address this challenge with $\pi_{\texttt{RL}}$, an open-source framework for training flow-based VLAs in parallel simulation. $\pi_{\texttt{RL}}$ implements two RL algorithms: (1) \textbf{Flow-Noise} models the denoising process as a discrete-time MDP with a learnable noise network for exact log-likelihood computation. (2) \textbf{Flow-SDE} integrates denoising with agent-environment interaction, formulating a two-layer MDP that employs an ODE-to-SDE conversion for efficient RL exploration. We evaluate $\pi_{\texttt{RL}}$ on the LIBERO, ManiSkill, and MetaWorld benchmarks. On LIBERO, $\pi_{\texttt{RL}}$ boosts few-shot SFT models $\pi_0$ and $\pi_{0.5}$ from 57.6% to 97.6% and from 77.1% to 98.3%, respectively. On ManiSkill, we train $\pi_{\texttt{RL}}$ in 320 parallel environments, improving $\pi_0$ from 38.4% to 78.8% and $\pi_{0.5}$ from 40.1% to 90.8% across 4352 variations of a pick-and-place task. On MetaWorld, RL is conducted over 50 different manipulation tasks and yields performance gains of 35.0% and 26.9% for the $\pi_0$ and $\pi_{0.5}$ models, respectively. Overall, $\pi_{\texttt{RL}}$ achieves significant performance gains and stronger generalization over SFT models, validating the effectiveness of online RL for flow-based VLAs.
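
The abstract's central obstacle, intractable action log-likelihoods from iterative denoising, and the ODE-to-SDE remedy can be made concrete with a minimal sketch. This is an illustration under assumptions, not the paper's implementation: `velocity_net`, the constant `sigma`, and the Euler-Maruyama drift are placeholder choices. Once the deterministic flow sampler is replaced by an SDE with matching marginals, each denoising step becomes a diagonal-Gaussian transition, and the action's log-likelihood is the sum of per-step log-densities:

```python
import torch

def sde_denoise_step(velocity_net, x, t, dt, sigma):
    # One Euler-Maruyama step of the stochastic sampler. Converting the
    # deterministic flow ODE into an SDE with the same marginals turns each
    # denoising step into a Gaussian transition, so its log-likelihood is
    # tractable -- the property PPO-style policy-gradient updates need.
    v = velocity_net(x, t)            # predicted velocity field (illustrative)
    mean = x + v * dt                 # drift term
    std = sigma * dt.sqrt()           # diffusion term
    x_next = mean + std * torch.randn_like(x)
    # Diagonal-Gaussian log-density of the sampled step, summed over dims.
    log_prob = torch.distributions.Normal(mean, std).log_prob(x_next).sum(-1)
    return x_next, log_prob

def sample_action_with_logprob(velocity_net, x0, num_steps=10, sigma=0.1):
    # Full denoising chain: the action log-likelihood is the sum of the
    # per-step Gaussian log-probs (chain rule over the denoising MDP).
    x, total_logp = x0, 0.0
    dt = torch.tensor(1.0 / num_steps)
    for k in range(num_steps):
        t = torch.full((x.shape[0],), k / num_steps)
        x, logp = sde_denoise_step(velocity_net, x, t, dt, sigma)
        total_logp = total_logp + logp
    return x, total_logp

# Toy usage: batch of 4 samples, 7-dim continuous actions.
toy_net = lambda x, t: -x             # dummy velocity field, for illustration
action, logp = sample_action_with_logprob(toy_net, torch.randn(4, 7))
```

In Flow-SDE's two-layer MDP, denoising steps like these presumably form one layer and environment transitions the other; Flow-Noise instead replaces the fixed `sigma` here with a learnable noise network for exact log-likelihood computation.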