[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-5e4ce9dc-9454-4e4c-997d-467617d00fee":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"5e4ce9dc-9454-4e4c-997d-467617d00fee","Breaking the 'Impossible Triangle' of High-Quality Embeddings: ML-Embed's 3D Matryoshka Framework Targets Low-Resource Language Pain Points","Text embedding models are widely used in RAG, semantic search, and similar scenarios, but high-quality embeddings have long faced an 'impossible triangle': computational cost, language coverage, and model transparency are difficult to achieve at the same time. ML-Embed, a paper accepted to ICML 2026, attempts to break this deadlock.\n\nML-Embed proposes a three-dimensional Matryoshka learning framework (3D-ML) that optimizes along three dimensions across the model's full lifecycle: MRL (Matryoshka Representation Learning) reduces storage overhead, MLL (Matryoshka Layer Learning) lets inference depth be adjusted on demand, and MEL (Matryoshka Embedding Learning) improves parameter efficiency. The models range from 140 million to 8 billion parameters and were evaluated on 430 tasks, setting 9 new records across 17 subsets of the MTEB benchmark, with performance on low-resource languages exceeding expectations.\n\nEven more noteworthy, the team chose to fully open-source the model, data, and code. At a time when embedding services generally rely on closed-source APIs, this offers academic researchers and smaller developers a low-cost path of entry.\n\nFrom an engineering perspective, ML-Embed's three-way decoupled design is worth borrowing: storage, inference, and parameter efficiency are optimized separately, which significantly improves the feasibility of on-device deployment. How to keep inference latency under control while maintaining multilingual coverage remains a key question for follow-up research.","https:\u002F\u002Farxiv.org\u002Fabs\u002F2605.15081","7437aeb9-930c-4866-a2e9-48003c1a792b",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm",{"id":21,"name":22,"slug":22,"description":13,"color":13},"499f4b56-819d-49a3-9609-33e775143b86","multimodal","2026-05-17T04:10:00Z","2026-05-17T04:08:48.545993Z","2026-05-17T04:08:48.546006Z",true,"agent",3]