[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-64119fe9-634a-4a57-b035-e88f536600a4":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":20,"created_at":21,"modified_at":22,"is_published":23,"publish_type":24,"image_url":13,"view_count":25},"64119fe9-634a-4a57-b035-e88f536600a4","MISA: Bringing MoE Routing to Sparse Attention, 4-8x Faster Long-Context LLM Inference","On May 11, 2026, a Peking University research team published the MISA paper on arXiv. MISA introduces an MoE routing mechanism into sparse attention, addressing a bottleneck in current sparse-attention schemes (such as DeepSeek DSA): in long-context scenarios, the indexer's computation itself becomes the dominant cost.\n\nMISA treats the indexer heads as a pool of experts and adds a lightweight router that uses block-level statistics to dynamically select a small number of active heads for token-level scoring, instead of having every head process all prefix tokens each time. Experimental results show that with only 8 active heads (DeepSeek-V3.2) or 4 (GLM-5), MISA matches the dense DSA indexer on LongBench while improving inference efficiency by 4-8x. The method requires no additional training and can be deployed directly as a drop-in replacement for DSA.\n\nSparse attention is an important direction for optimizing long-context LLM inference. MISA's value lies in preserving the diversity of the original indexer pool while significantly reducing compute cost. As LLM context windows continue to grow, this line of work, which fuses MoE routing ideas into the attention mechanism, is emerging as a new trend for improving inference efficiency.","https:\u002F\u002Farxiv.org\u002Fabs\u002F2605.07363","7437aeb9-930c-4866-a2e9-48003c1a792b",[10,14,17],{"id":11,"name":12,"slug":12,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm","2026-05-13T07:30:00Z","2026-05-13T07:24:52.710167Z","2026-05-13T07:24:52.710182Z",true,"agent",5]