InftyThink explained: an iterative long-horizon reasoning paradigm from Zhejiang University and Meituan at ICLR 2026

Large language models are impressive at long-horizon reasoning, but the conventional paradigm faces three difficulties: compute cost grows quadratically with sequence length, reasoning is capped by the maximum context window, and performance degrades sharply once generation exceeds the lengths seen in pretraining. InftyThink, published jointly by Zhejiang University and Meituan at ICLR 2026, proposes a paradigm shift: recasting monolithic reasoning as an iterative process.

The core idea is simple: split a long reasoning chain into multiple short segments, and after each segment insert a concise summary of progress so far. This sawtooth memory pattern lets the model reason to effectively unbounded depth without ever loading the full context at once, while keeping compute cost bounded. In experiments on Qwen2.5-Math-7B, the method yields 3-13% gains on the MATH500, AIME24, and GPQA_diamond benchmarks.

The significance of this work is that it breaks a long-accepted trade-off: reasoning depth and compute efficiency were assumed to be mutually exclusive. InftyThink shows that merely restructuring how reasoning is laid out delivers both, without modifying the model architecture. The researchers reconstructed the reasoning chains in the OpenR1-Math dataset into 333K iterative training samples, now open-sourced on HuggingFace.

For practitioners, the takeaway is that today's context-window limits are not insurmountable: by reorganizing the reasoning process, even 7B-scale models can beat larger closed-source models on complex multi-step reasoning tasks. As more open-source projects adopt the technique, unbounded reasoning may become a standard capability of next-generation LLMs.

Source: https://openreview.net/forum?id=T1h5em349L
Tags: efficiency, inference, llm, transformer
Published: 2026-05-14
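The segment-then-summarize loop described above can be sketched as a small driver function. This is a minimal illustration, not the paper's implementation: the `generate` and `summarize` callables, the prompt interfaces, and the stopping rule are all our assumptions, with toy stand-ins used below in place of a real model.

```python
def iterative_reason(question, generate, summarize, max_rounds=8):
    """InftyThink-style bounded-context reasoning loop (illustrative sketch).

    generate(question, summary)  -> (segment, answer_or_None)
    summarize(summary, segment)  -> new concise progress summary

    Each round, the model sees only the question plus the running summary,
    so the context fed to it stays bounded no matter how deep the overall
    reasoning goes; the summary is the "sawtooth" compressed memory.
    """
    summary = ""
    for _ in range(max_rounds):
        segment, answer = generate(question, summary)
        if answer is not None:          # model produced a final answer
            return answer
        summary = summarize(summary, segment)  # compress, then continue
    return None                          # gave up after max_rounds


# Toy stand-in "model" that needs three rounds before it can answer.
def make_stubs():
    state = {"rounds": 0}

    def generate(question, summary):
        state["rounds"] += 1
        if state["rounds"] >= 3:
            return ("final step", "42")
        return (f"partial step {state['rounds']}", None)

    def summarize(summary, segment):
        return (summary + " | " + segment).strip(" |")

    return generate, summarize


gen, summ = make_stubs()
result = iterative_reason("toy question", gen, summ)
print(result)  # -> 42
```

The key design point is that the summary, not the full transcript, carries state between rounds, which is what decouples total reasoning depth from per-call context length.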