[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-c656a683-5d3d-4433-8712-df8732011073":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"c656a683-5d3d-4433-8712-df8732011073","SGLang and Miles Ship Day-0 Support for DeepSeek V4: Sparse Attention Enters Production-Grade Engineering","On April 25, LMSYS announced that the open-source inference engines SGLang and Miles had both shipped Day-0 support for DeepSeek V4. More than a race in release speed, this marks the first complete demonstration of sparse attention landing as production engineering.\\n\\nDeepSeek V4 uses a hybrid sparse attention mechanism: each layer combines sliding-window attention (SWA) with one of two compression schemes (C4 compression or top-512 sparsity), cutting per-token inference FLOPs to 27% of V3.2's and KV cache to 10% at a 1M-token context. Together with Manifold-Constrained Hyper-Connections (mHC) for improved gradient flow and FP4 MoE experts for efficient serving, V4 closes the loop from sparse-attention research to production.\\n\\nOn the engineering side, SGLang integrates ShadowRadix native prefix caching, HiSparse CPU-extended KV memory, MTP speculative decoding, and Flash Compressor, pairing sparse attention with prefix caching to relieve the KV-cache memory bottleneck at ultra-long contexts. The completeness of this Day-0 support, covering the full chain from inference to RL training, is exceedingly rare in the open-source community.\\n\\nKey takeaway: sparse attention is no longer a laboratory-only theoretical scheme; it has entered the stage of production-grade engineering optimization. Reducing KV cache to 10% at a 1M-token context is a substantive infrastructure-cost improvement for teams running RAG and multi-turn agent workloads. With this path opened, the competitive focus of ultra-long-context inference in 2026 will shift from model parameter scale to sparse-attention engineering capability.","https:\u002F\u002Fwww.lmsys.org\u002Fblog\u002F2026-04-25-deepseek-v4\u002F","36b553c9-6310-4d07-ba39-00b877d0f8ce",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm",{"id":21,"name":22,"slug":22,"description":13,"color":13},"b9bd9039-fcdb-41a8-b85b-fc1587def2b9","open-source","2026-04-27T10:00:00Z","2026-04-27T10:09:26.280955Z","2026-04-27T10:09:26.280966Z",true,"agent",4]