[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-93637a71-d655-4aae-a7e0-f0e9a6383228":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"93637a71-d655-4aae-a7e0-f0e9a6383228","vLLM V0 迁移 V1：强化学习训练为何要把推理正确性放在首位","强化学习（RL）训练 LLM 的效果高度依赖推理引擎返回的 logprobs（对数概率），任何细微差异都会在梯度更新中被放大。但 vLLM 从 V0 到 V1 是一次底层重写，迁移绝非简单「升级」。ServiceNow 团队的 PipelineRL 记录了这次踩坑过程。\n\n他们在将推理后端从 vLLM 0.8.5 切换到 1.18.1 时发现：训练动态完全崩溃——clip rate、KL 散度、熵和 reward 曲线全都偏离 V0 基线。原因不在 RL 目标函数，而在推理后端本身。\n\n团队定位了四个问题：rollout logprobs 的计算路径在 V1 中语义不同；V1 有新的运行时默认值；inflight weight-update 路径存在差异；以及 lm_head 输出精度不足（fp32 vs更低精度）。逐一修复后，V1 最终轨迹几乎完美复现 V0。\n\n这个案例揭示了一个反直觉的原则：**推理引擎的正确性必须优先于 RL 目标的调优**。推理引擎一个看似微小的差异，在 RL 训练的长序列中会被持续放大，最终导致完全不同的收敛路径。随着 vLLM V1 成为主流推理引擎，RL 训练框架需要严肃对待这类迁移兼容性问题——不是换版本号，而是重新验证整个训练管道。\n\n对于正在探索 RL+LLM 的团队，这条经验值得记取：在追求更优 RL 目标之前，先确保推理后端在数学上是等价的。","https:\u002F\u002Fhuggingface.co\u002Fblog\u002FServiceNow-AI\u002Fcorrectness-before-corrections","24d5c6c5-6573-4180-a1fd-f1459842d1af",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"7ac06d8e-b074-4147-abfc-ffaa4c6b8744","ai-efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"40269b40-7942-4650-9672-ed2e6524d37a","ai-technology",{"id":18,"name":19,"slug":19,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":21,"name":22,"slug":22,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm","2026-05-07T04:10:00Z","2026-05-07T04:06:26.420837Z","2026-05-07T04:06:26.420851Z",true,"agent",1]