# Qwen3.6-27B: A 27B Dense Model That Beats a 397B MoE on Coding, a New Single-GPU Option

Alibaba's Qwen team released Qwen3.6-27B on April 22, the first dense open-weight model of the Qwen3.6 generation. Notably, this 27B-parameter dense model outperforms the previous generation's 397B-parameter MoE flagship on several coding benchmarks, while its Q4_K_M quantization weighs in at only 16.8 GB, small enough to run on a single consumer GPU.

On core benchmarks, Qwen3.6-27B scores 77.2 on SWE-bench Verified (vs. 76.2 for the 397B MoE), 59.3 on Terminal-Bench 2.0 (vs. 52.5), and 48.2 on SkillsBench (vs. 30.0). This "dense beats MoE" result is a landmark for the open-source community: over the past year the industry has broadly assumed that scaling parameter count through sparse experts is the optimal path to frontier performance, whereas Qwen3.6-27B suggests that architecture design and training strategy can matter more than headline parameter numbers.

Architecturally, Qwen3.6-27B uses a hybrid attention scheme that interleaves linear attention (Gated DeltaNet) and quadratic attention (Gated Attention) at a 3:1 ratio: its 64 layers comprise 16 repeating blocks, each containing 3 DeltaNet sublayers and 1 Gated Attention sublayer. Native context length is 262K tokens, extendable beyond 1M via YaRN RoPE scaling. The model also introduces a "Thinking Preservation" feature that retains earlier reasoning chains across agentic iteration loops, avoiding redundant regeneration.

The model is natively multimodal, with balanced results on MMMU (82.9), VideoMME (87.7), and the AndroidWorld GUI-agent benchmark (70.3). It is released under the Apache 2.0 license, with deployment support for SGLang, vLLM, and KTransformers.

Source: https://rits.shanghai.nyu.edu/ai/qwen3-6-27b-a-dense-27b-model-that-beats-a-397b-moe-on-coding/
Published: 2026-04-24 · Tags: china-ai, coding-agent, llm, new-model, open-source
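The 3:1 interleaving of linear and quadratic attention can be sketched as a simple layer schedule. This is an illustrative sketch only: `build_layer_pattern` and the sublayer names are hypothetical, not the model's actual module or config names.

```python
# Sketch of Qwen3.6-27B's reported hybrid attention schedule: 64 layers
# built from 16 repeating blocks, each with 3 linear-attention sublayers
# (Gated DeltaNet) followed by 1 quadratic-attention sublayer (Gated
# Attention). Names are illustrative; the real implementation may differ.

def build_layer_pattern(
    num_layers: int = 64,
    block: tuple = ("deltanet", "deltanet", "deltanet", "gated_attn"),
) -> list:
    """Repeat the 4-sublayer block until num_layers sublayers are produced."""
    assert num_layers % len(block) == 0, "num_layers must be a multiple of the block size"
    return list(block) * (num_layers // len(block))

pattern = build_layer_pattern()
print(len(pattern))               # 64 layers total
print(pattern.count("deltanet"))  # 48 linear-attention sublayers
print(pattern.count("gated_attn"))  # 16 quadratic-attention sublayers
print(pattern[:4])                # one repeating block
```

The 3:1 ratio keeps most layers linear in sequence length (important for the 262K-token native context) while the periodic quadratic layers preserve full-precision token mixing.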
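As a sanity check on the 16.8 GB figure: Q4_K_M stores roughly 4.5–5 effective bits per weight once block scales and the tensors kept at higher precision are included. The bits-per-weight value below is an assumption for illustration, not a number from the release.

```python
# Back-of-the-envelope GGUF size estimate for a 27B-parameter model at
# Q4_K_M. The ~4.95 bits/weight effective rate is an assumed figure
# (4-bit blocks plus scales and some higher-precision tensors),
# not an official specification.
params = 27e9
bits_per_weight = 4.95  # assumed effective Q4_K_M rate
size_gb = params * bits_per_weight / 8 / 1e9
print(f"{size_gb:.1f} GB")  # ~16.7 GB, in line with the quoted 16.8 GB
```

A file of this size fits in the 24 GB VRAM of a single consumer GPU with room left for the KV cache, which is what makes single-card deployment practical.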