Sakana AI Releases KAME: A Tandem Architecture That Lets Voice AI "Think While Speaking"

Source: https://sakana.ai/kame-icassp-2026/ · Published: 2026-05-05 · Tags: ai-efficiency, inference, multimodal, open-source

Sakana AI has published a paper at ICASSP 2026 introducing KAME (Tandem Architecture for Enhancing Knowledge in Real-Time Speech-to-Speech Conversational AI), a new paradigm that lets conversational voice AI "think while speaking."

Traditional voice dialogue systems face a fundamental tension: fast speech-to-speech (S2S) models can respond instantly but have limited reasoning ability, while cascaded systems backed by a powerful LLM are smarter but lose the natural real-time feel of conversation while waiting for the LLM to finish reasoning, falling back into a "think first, then speak" mode.

KAME's core idea is dual-track parallelism. At the front end, a lightweight S2S model runs the fast response loop, letting the AI start speaking the way a human would; meanwhile, a back-end LLM runs asynchronously in the background, continuously generating candidate responses and injecting them into the front end in real time as "oracle" signals. This shifts the AI's behavioral paradigm from "think first, then speak" to "think while speaking."

Another highlight of the framework is that the back-end LLM is fully pluggable. Developers can swap in GPT-4.1, Claude Opus, Gemini 2.5 Flash, or other models to suit the task, with no changes to the front-end architecture. Sakana AI's experimental results show Claude performing better on reasoning-oriented tasks, while GPT scores higher on humanities questions.

Sakana AI has open-sourced the full KAME model on Hugging Face and released the paper and an accompanying blog post. This work suggests that the trade-off between "fast" and "deep" in voice AI is not irreconcilable: with architectural innovation, the real-time responsiveness and the depth of conversational intelligence can improve together. As the open-source community follows up, "thinking while speaking" may well become the de facto standard for the next generation of voice assistants.
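The dual-track idea described above can be sketched in a few lines of Python. This is a minimal toy simulation, not Sakana AI's actual implementation: `fast_s2s_reply`, `backend_oracle`, and the `oracle_box` mailbox are hypothetical names, the sleeps stand in for real speech and LLM latency, and strings stand in for audio. It only illustrates the pattern of the front end starting to speak immediately while an asynchronous back end injects an "oracle" answer mid-utterance.

```python
import asyncio


async def fast_s2s_reply(question: str, oracle_box: list, spoken: list) -> None:
    """Front-end track: start speaking immediately, token by token."""
    for token in ["Good", "question", "--"]:
        spoken.append(token)
        await asyncio.sleep(0.01)  # simulated per-token speech latency
        if oracle_box:             # oracle already arrived: stop stalling
            break
    # Wait briefly for the back-end oracle if it has not arrived yet.
    for _ in range(50):
        if oracle_box:
            break
        await asyncio.sleep(0.01)
    if oracle_box:
        # Weave the oracle's knowledge into the ongoing utterance.
        spoken.extend(oracle_box[0].split())
    else:
        spoken.append("(fallback answer)")


async def backend_oracle(question: str, oracle_box: list) -> None:
    """Back-end track: slower but knowledgeable; placeholder for a real LLM call."""
    await asyncio.sleep(0.2)  # simulated LLM reasoning latency
    oracle_box.append("the answer is 42")


async def converse(question: str) -> str:
    spoken: list = []
    oracle_box: list = []
    # Both tracks run concurrently: speech begins before the oracle finishes.
    await asyncio.gather(
        fast_s2s_reply(question, oracle_box, spoken),
        backend_oracle(question, oracle_box),
    )
    return " ".join(spoken)


print(asyncio.run(converse("What is 6 x 7?")))
# → Good question -- the answer is 42
```

The key property, mirroring the article, is that the filler tokens are emitted before the oracle completes, so the user hears a response right away, and the knowledgeable answer is spliced in as soon as it becomes available rather than gating the start of speech.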