TokenSkip: Improving LLM Reasoning Efficiency via Controllable Chain-of-Thought Compression

Large language models typically generate long chains of thought (Chain-of-Thought) to lay out their reasoning when tackling complex tasks. These chains, however, often contain many redundant tokens, which slows inference and drives up compute cost.

TokenSkip, a method presented at EMNLP 2025, teaches an LLM to selectively skip unimportant tokens while generating its chain of thought, yielding controllable compression. The insight comes from an empirical analysis of chains of thought: not every token in the reasoning process matters equally; the model in fact emits large numbers of filler tokens between key decision points.

At training time, TokenSkip first uses the original model to generate complete chain-of-thought trajectories, then compresses each one to a target length according to a chosen compression ratio γ, learning shortcuts between key reasoning nodes in the process. Experiments show that even when the chain of thought is compressed to 20% of its original length, reasoning quality is largely preserved.

From an engineering perspective, TokenSkip's value lies in being a training-time compression: the compression logic is embedded directly in the model weights rather than depending on an external algorithm at inference time. Deployment therefore needs no extra decoder or auxiliary model; the compression ships with the model itself.

For workloads that invoke LLM reasoning at high frequency, techniques like TokenSkip are worth watching. They point to a broader trend: once a model is smart enough, the next battleground is efficiency, doing the same reasoning with less compute.

Source: https://arxiv.org/abs/2502.12067
Tags: efficiency, inference, llm
Published: 2026-05-15
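To make the compression step concrete, here is a minimal sketch of pruning a chain of thought down to a ratio γ of its original token count: keep only the highest-importance tokens, in their original order. The importance scorer below (token length) is a dummy stand-in for illustration only; TokenSkip itself derives token importance from a real scoring model, and the function name `compress_cot` is a hypothetical helper, not the paper's API.

```python
# Sketch: compress a chain of thought to ratio gamma by dropping
# low-importance tokens while preserving token order.
from math import ceil

def compress_cot(tokens, scores, gamma):
    """Keep the ceil(gamma * n) highest-scoring tokens, in original order."""
    n = len(tokens)
    k = max(1, ceil(gamma * n))
    # Indices of the k most important tokens, restored to document order.
    keep = sorted(sorted(range(n), key=lambda i: scores[i], reverse=True)[:k])
    return [tokens[i] for i in keep]

cot = "so the total is 3 plus 4 which equals 7".split()
dummy_scores = [len(t) for t in cot]  # placeholder importance scores
compressed = compress_cot(cot, dummy_scores, gamma=0.5)
print(" ".join(compressed))
```

In the actual method, trajectories compressed this way (tagged with their γ) serve as fine-tuning data, which is how, as the article notes, the compression behavior ends up inside the model weights rather than in an inference-time component.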