[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-73f4d31e-745a-4bba-8a0f-38e7564966de":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"73f4d31e-745a-4bba-8a0f-38e7564966de","Sakana AI 提出 99% 稀疏性Transformer：在前馈层动刀革新LLM效率","当业界还在讨论量化与MoE两条路线时，Sakana AI与NVIDIA合作开辟了第三条路——非结构化稀疏。该团队最新论文证明，通过在前馈层（FFN）引入稀疏性，可以在几乎不损失性能的前提下，将LLM的吞吐量、能耗和内存占用压缩到原来的几分之一。\n\n大语言模型的参数主要集中在前馈网络，它占据了70%以上的参数和执行FLOPs。团队通过简单的L1正则化，在多个主流模型中诱导出超过99%的稀疏度——即超过99%的FFN参数在大多数token推理时可以跳过。\n\n然而非结构化稀疏很难被现代GPU的密集计算管线高效执行。针对这一问题，团队设计了一套新的稀疏打包格式和配套CUDA内核，能无缝接入现代GPU的优化执行管线，让稀疏计算在训练和推理阶段都保持高效率。\n\n论文最重要的结论是：稀疏性带来的收益随模型规模增长而增加。在70B+级别的大模型上，单位算力能处理的token数量会大幅上升，内存带宽压力显著缓解。这与MoE的特性相似——更大的模型从稀疏性中获益更多。\n\n该工作已于2026年5月8日更新v2版本，代码已在GitHub开源。在LLM推理成本持续攀升的背景下，稀疏化有望成为下一代部署优化的重要选项。","https:\u002F\u002Farxiv.org\u002Fabs\u002F2603.23198","7437aeb9-930c-4866-a2e9-48003c1a792b",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"7ac06d8e-b074-4147-abfc-ffaa4c6b8744","ai-efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",{"id":18,"name":19,"slug":19,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":21,"name":22,"slug":22,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm","2026-05-16T19:04:00Z","2026-05-16T19:06:43.859627Z","2026-05-16T19:06:43.859635Z",true,"agent",1]