DFlash: a diffusion model gives LLM inference a 6× speedup, a new breakthrough for speculative decoding

Autoregressive large language models (LLMs) generate one token at a time, which makes inference slow and leaves GPUs underutilized. Speculative decoding tries to break this bottleneck: a small model proposes tokens, and the large model verifies them in parallel. But the current state-of-the-art method, EAGLE-3, still generates its draft tokens autoregressively, which caps real-world speedups at 2-3×.

**DFlash**, published on arXiv (2602.06036) by Jian Chen, Yesheng Liang, and Zhijian Liu in February 2026, replaces the autoregressive drafter with a lightweight block diffusion model that generates an entire block of draft tokens in parallel in a single forward pass, delivering up to a 6× lossless speedup, 2.5× faster than EAGLE-3.

**Core technique**: diffusion models are naturally suited to parallel generation. DFlash's key innovation is conditioning the diffusion drafter on context features extracted from the target model, which preserves draft quality and keeps the acceptance rate high. Because the forward-pass cost of generating a whole block of tokens is essentially fixed, DFlash turns speculative decoding from an optimization trick into a scalable serving architecture.

**Measured results**: with DFlash, Qwen3.5-27B reaches roughly 65 tokens/s on a pair of RTX 3090s. After a demo video went viral on social media on April 7, the open-source community moved fast: SGLang already supports DFlash, vLLM integration is in progress, and llama.cpp is discussing adoption.

**Commentary**: for local-deployment enthusiasts, 65 tokens/s from a 27B model finally makes interactive use practical. More importantly, DFlash demonstrates the potential of diffusion models for inference acceleration: since the cost of generating a fixed-length token block is largely independent of its length, it opens a new path for improving hardware utilization. The training recipe is slated for open-source release, at which point any LLM can have its own DFlash drafter trained.

Source: https://arxiv.org/abs/2602.06036
Tags: diffusion, efficiency, inference, llm
Published: 2026-04-26
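The draft-then-verify loop behind speculative decoding can be sketched in a toy form. Everything below is illustrative, not DFlash's actual implementation: both "models" are deterministic stand-ins, and the hypothetical `draft_block` plays the role DFlash assigns to its block diffusion drafter, which would emit the whole block in one forward pass rather than token by token.

```python
# Toy sketch of greedy speculative decoding with a block drafter.
# target_next_token stands in for the large LLM; draft_block stands in
# for the drafter (in DFlash, a block diffusion model conditioned on
# the target model's context features).

BLOCK = 4  # number of draft tokens proposed per step

def target_next_token(ctx):
    """Deterministic toy target model: next token = sum(ctx) mod 10."""
    return sum(ctx) % 10

def draft_block(ctx, k=BLOCK):
    """Toy drafter: mimics the target, with one deliberate mismatch."""
    out, c = [], list(ctx)
    for i in range(k):
        t = sum(c) % 10
        if i == 2:            # inject a wrong guess at position 2
            t = (t + 1) % 10
        out.append(t)
        c.append(t)
    return out

def speculative_step(ctx):
    """One draft-then-verify step; returns the tokens actually emitted."""
    draft = draft_block(ctx)
    # The target model scores every draft position in ONE parallel pass;
    # we simulate that by computing its greedy choice at each prefix.
    accepted, c = [], list(ctx)
    for t in draft:
        expect = target_next_token(c)
        if t == expect:       # draft token matches: accept it
            accepted.append(t)
            c.append(t)
        else:                 # mismatch: emit the target's token, stop
            accepted.append(expect)
            break
    else:
        # every draft token accepted: target adds one bonus token
        accepted.append(target_next_token(c))
    return accepted

print(speculative_step([3, 1, 4]))  # → [8, 6, 2]
```

The output always equals what the target model would have produced greedily on its own, which is the "lossless" property the article refers to: the drafter only changes how many target-model forward passes are needed, never the generated text.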