arXiv:2510.11978v1 Announce Type: new
Abstract: Preference-based finetuning of vision–language models (VLMs) is brittle: trivially wrong negatives inject uninformative gradients that destabilize training. We recast alignment as learning-dynamics-aware optimization and introduce Cooling-Weighted DPO (CW-DPO), a two-stage recipe that explicitly models and exploits the training trajectory. Stage 1 performs supervised finetuning with gentle negatives: low-weight smoothed supervision that regularizes the base policy and curbs overconfidence without explicit penalties. Stage 2 applies a DPO objective in which the negative term is scaled by a cooling weight computed from the model's average token log-probability on each negative, suppressing uninformative gradients from easy or off-distribution samples while preserving signal from hard negatives. In practice, we emphasize on-policy negatives and allow mixed negatives by blending a controllable fraction of dataset negatives to maintain contrast freshness. Throughout, we instrument training with $\Delta\log p$ probes on positives and negatives as first-class signals for early stopping, curriculum design, and failure diagnosis. Across diverse VLM tasks, CW-DPO yields more stable optimization, better calibration, and higher pairwise win-rates than SFT-only and vanilla DPO, while converging in fewer steps. Ablations isolate the cooling-weight mechanism as the primary driver of these gains and show complementary benefits from mixing on-policy and dataset negatives. Taken together, our results show that smoothing learning dynamics before cooling preferences is a simple, general principle for robust VLM alignment.
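The Stage 2 mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exponential form of the cooling weight, the temperature `tau`, and the per-pair scalar interface are all assumptions; the abstract specifies only that the negative (rejected) term of a DPO-style objective is scaled by a weight derived from the model's average token log-probability on that negative.

```python
import math

def cooling_weight(avg_token_logp, tau=1.0):
    """Map a negative's average token log-prob (<= 0) to a weight in (0, 1].

    A very negative average log-prob marks an 'easy' or off-distribution
    negative, so its weight decays toward 0 and its gradient is suppressed;
    hard negatives (log-prob near 0) keep weight near 1. The exponential
    form and tau are illustrative assumptions, not the paper's definition.
    """
    return math.exp(avg_token_logp / tau)

def cw_dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg,
                avg_token_logp_neg, beta=0.1, tau=1.0):
    """Cooling-weighted DPO loss for one preference pair (sketch).

    Standard DPO uses -log sigmoid(beta * (pos_margin - neg_margin));
    here the negative margin is additionally scaled by the cooling weight.
    """
    w = cooling_weight(avg_token_logp_neg, tau)
    margin = beta * ((logp_pos - ref_logp_pos)
                     - w * (logp_neg - ref_logp_neg))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Under this form, a negative the model already assigns vanishingly small probability contributes almost nothing to the loss, while a plausible ("hard") negative retains nearly its full DPO gradient.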
