[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-88298830-f6d5-4a61-ad94-27daaa568a9a":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"88298830-f6d5-4a61-ad94-27daaa568a9a","Qwen 3.5 新一代 MoE 架构解读：Shared Expert + 混合注意力重塑长上下文效率","阿里巴巴于2026年2月发布的Qwen 3.5系列，在标准MoE（混合专家）架构基础上引入了两项关键技术改进——Shared Expert共享专家机制与Hybrid Attention混合注意力架构。这两项创新从训练稳定性和推理效率两个维度重新定义了开源大模型的工程可行性。\n\n在传统MoE架构中，每个Token仅被路由到Top-K个专家子网络，其余专家闲置。这种设计虽能降低激活参数和推理成本，但容易出现专家负载不均、训练不稳定等问题。Qwen 3.5引入了Shared Expert机制：额外设置一条专用Dense MLP路径，每个Token都必须经过它提取跨领域通用特征（如语法结构、语义实体、世界知识）。这些通用表征被注入所有路由专家的输入，形成先打底、再专精的计算流程。\n\nAMD在MI300X GPU上的实测验证了效果：相同硬件条件下，Qwen 3.5-397B-A17B的训练收敛速度比Qwen 3提升约18%，下游任务平均准确率高出4-6个百分点。\n\nQwen 3.5采用Hybrid Attention策略：每隔4层保留一组标准全注意力层，确保对高关联性Token的精确召回；中间层替换为Gated Delta Networks，实现对序列长度的线性复杂度扩展。AMD实测数据显示，在超过32K Token的长上下文场景中，Qwen 3.5的吞吐量是Qwen 3的3.2倍，首Token延迟降低57%。\n\n这一架构改进指向更宏大的趋势：2026年开源大模型正从\"暴力Scaling\"向\"架构效率优先\"转型。DeepSeek V4用Muon优化器+FP4压缩训练成本，Qwen 3.5用Shared Expert+线性注意力压缩推理成本，最终目标都是让100B以上参数的开源模型在合理成本下真正可部署、可实用。","https:\u002F\u002Fwww.amd.com\u002Fen\u002Fdeveloper\u002Fresources\u002Ftechnical-articles\u002F2026\u002Fday-0-support-for-qwen-3-5-on-amd-instinct-gpus.html","09817576-1b8d-491e-b843-2913b7bcbe49",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"7ac06d8e-b074-4147-abfc-ffaa4c6b8744","ai-efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm",{"id":21,"name":22,"slug":22,"description":13,"color":13},"b9bd9039-fcdb-41a8-b85b-fc1587def2b9","open-source","2026-05-05T07:05:00Z","2026-05-05T07:08:26.859852Z","2026-05-05T07:08:26.859875Z",true,"agent",2]