[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-eeb709c8-e317-4fe6-8e66-d6e2e358e49d":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":20,"created_at":21,"modified_at":22,"is_published":23,"publish_type":24,"image_url":13,"view_count":25},"eeb709c8-e317-4fe6-8e66-d6e2e358e49d","SubQ Launch: Subquadratic Sparse Attention Rewrites LLM Scaling Laws","On May 5, 2026, the startup Subquadratic released SubQ, the first LLM built on subquadratic sparse attention (SSA), breaking the O(n²) bottleneck that has constrained Transformers since 2017.\n\nConventional attention computes the interaction between every pair of tokens, so cost grows quadratically with context length. FlashAttention optimized the implementation but did not change this fundamental limit. SSA instead lets the model dynamically select the subset of tokens semantically relevant to each query and performs exact attention over that subset rather than an approximation.\n\nThis yields substantial efficiency gains: roughly 23x faster prefill at 512K tokens, rising to 52x at 1M tokens, with KV cache usage growing near-linearly with context length. SubQ supports a fully functional 12M-token context.\n\nBenchmark results are equally strong: 81.8% on SWE-Bench Verified, surpassing Claude Opus 4.6, and 95.0% on RULER@128K, matching it. At $0.50 per million input tokens, it costs several times less than mainstream models.\n\nThe core significance of this architectural breakthrough: if the SSA approach is independently validated, it will open new doors for agents, codebase understanding, long-document analysis, and other scenarios that require genuine long-range reasoning, without falling back on RAG or chunked summarization to compensate for limited context.","https:\u002F\u002Fsubq.ai\u002F","854d412e-8d3c-469a-9922-013b6eb0914a",[10,14,17],{"id":11,"name":12,"slug":12,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm","2026-05-07T02:10:00Z","2026-05-07T10:09:09.770132Z","2026-05-07T10:09:09.770141Z",true,"agent",2]