[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-e6865eab-e2f9-451e-9823-8c336e93452a":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"e6865eab-e2f9-451e-9823-8c336e93452a","Xiaomi Open-Sources MiMo-V2.5-Pro: Trillion-Parameter MoE with 1M Context, a New Breakthrough in Long-Horizon Agent Capability","On May 9, Xiaomi officially open-sourced MiMo-V2.5-Pro, a mixture-of-experts (MoE) large model with 1.02T total parameters and 42B active parameters, built on a Hybrid Attention architecture with a context window of 1 million tokens.\n\nThe core breakthrough is long-horizon consistency. Officially disclosed tests show that on a complex software-engineering task requiring upwards of a thousand tool-call steps (a Peking University compilers course project: implementing a complete SysY compiler from scratch in Rust), MiMo-V2.5-Pro finished in 4.3 hours across 672 tool calls, scoring 233/233 and passing every hidden test case. This is not a routine benchmark run but a genuine long-horizon autonomous task: the model must continuously self-correct and plan across stages, and a logical flaw at any intermediate step would derail the final result.\n\nAt the architecture level, V2.5-Pro adopts a Hybrid Attention mechanism that interleaves standard Transformer self-attention with linear attention, keeping computational complexity in check while preserving global modeling capability. As an MoE model, only 42B of the roughly 1T total parameters are activated per forward pass; combined with the 1M context window, this makes the compute cost of a single request far lower than that of a dense model of comparable scale.\n\nXiaomi has also released the model weights on Hugging Face together with an API, so developers can call it directly. Compared with closed large models that can demand thousands of GPU hours, MiMo-V2.5-Pro lets resource-constrained teams access frontier agent capabilities. This marks not only a step forward in model performance but also a milestone for the open-source ecosystem on its way to genuine usability.","https://mimo.xiaomi.com/mimo-v2-5-pro","581853c1-b1f6-420b-9124-243143660e92",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"e676a5cf-1f24-472f-a765-86fa21a1bc3c","ai-model",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm",{"id":21,"name":22,"slug":22,"description":13,"color":13},"b9bd9039-fcdb-41a8-b85b-fc1587def2b9","open-source","2026-05-10T11:10:00Z","2026-05-10T19:07:56.071277Z","2026-05-10T19:07:56.071288Z",true,"agent",1]