A New Breakthrough in Decoding Diversity: ESamp Lets Large Models Explore More Boldly

The trend of inference-time scaling for large language models is now clear: the more test-time compute, the better the performance. But standard stochastic sampling carries a fundamental contradiction: it only creates variation at the lexical level, its semantic exploration is extremely limited, and the model keeps repeating itself.

A new paper on arXiv (2604.24927) proposes Exploratory Sampling (ESamp) to address this pain point. The core idea: train a lightweight Distiller network to learn the mapping from the LLM's shallow-layer to deep-layer hidden representations. At decoding time, the Distiller's prediction error serves as a novelty signal: the larger the error, the less-explored the region the model is currently in, and candidate tokens are then re-weighted to steer generation toward less-charted semantic space.

Experiments show that ESamp significantly improves Pass@k efficiency on math, science, and code-reasoning benchmarks, and on creative-writing tasks it breaks the long-standing trade-off between diversity and coherence. Worst-case overhead stays under 5%, and the optimized version adds only 1.2%, entirely acceptable for production deployment.

The directional significance of this work matters even more: as inference costs keep falling and test-time compute becomes the norm, using the compute budget more intelligently will gradually matter more than stacking model parameters. From an application perspective, it is especially valuable for agent scenarios, multi-step reasoning, and tasks that require generating multiple candidate solutions. Models that actively explore rather than passively sample will hold a clear advantage in the long-context era.

Source: https://arxiv.org/abs/2604.24927
Tags: ai-efficiency, ai-model, inference, llm
Published: 2026-05-09T04:05:00Z
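The novelty-weighted decoding idea can be sketched in toy form. The paper's actual Distiller architecture, error normalization, and weighting scheme are not given here, so the linear stand-in Distiller, the `beta` bonus coefficient, and all names below are illustrative assumptions, not the authors' implementation:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distiller_error(shallow, deep, weight):
    # Linear stand-in for the Distiller: predict the deep hidden state
    # as weight * shallow and return the squared prediction error.
    return sum((weight * s - d) ** 2 for s, d in zip(shallow, deep))

def esamp_probs(logits, shallow_states, deep_states, weight=1.0, beta=1.0):
    # Boost each candidate token's logit by beta times the Distiller's
    # prediction error for that candidate: a high error signals a
    # less-explored region, so that token gains probability mass.
    bonuses = [distiller_error(s, d, weight)
               for s, d in zip(shallow_states, deep_states)]
    return softmax([l + beta * b for l, b in zip(logits, bonuses)])

# Two candidate tokens: token 0 is well-predicted (familiar territory),
# token 1 has a large Distiller error (novel territory).
logits = [2.0, 1.0]
shallow = [[1.0, 1.0], [1.0, -1.0]]
deep = [[1.0, 1.0], [3.0, 0.0]]

plain = softmax(logits)
explore = esamp_probs(logits, shallow, deep)
# The novel token's probability rises under exploratory re-weighting.
```

The sketch only shows the re-weighting step; in a real deployment the shallow and deep hidden states would come from the LLM's own layers at each decoding step.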
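For reference, the Pass@k metric cited in the results is usually computed with the unbiased estimator introduced alongside HumanEval (Chen et al., 2021): given n generated samples of which c are correct, pass@k = 1 - C(n-c, k) / C(n, k), the probability that at least one of k samples drawn without replacement is correct:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations (c correct)
    passes. When fewer than k incorrect samples exist, the result is 1."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 generations and 3 correct ones, a single draw succeeds 30%
# of the time, while drawing 5 succeeds far more often.
p1 = pass_at_k(10, 3, 1)   # 0.3
p5 = pass_at_k(10, 3, 5)
```

"Pass@k efficiency" in the article then refers to reaching a given pass@k with fewer samples n, which is where more diverse exploration pays off.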