[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-b26b4b63-5019-4344-a49e-95739f737376":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"b26b4b63-5019-4344-a49e-95739f737376","NVIDIA Nemotron 3 Nano Omni: Can an Open-Source Unified Multimodal Model Redefine the Efficiency Frontier for AI Agents?","While most AI agent systems still rely on separate models to handle vision, speech, and language, NVIDIA has fused them into one. On April 28, NVIDIA officially released Nemotron 3 Nano Omni, an open-source unified multimodal model built on a 30B-A3B hybrid MoE architecture that integrates perception and reasoning across vision, audio, and language in a single system. Efficiency is the central proposition of Nemotron 3 Nano Omni. The typical approach in today's agent systems is to deploy an independent model for each modality; at inference time, data shuttles back and forth between models, which both adds latency and tends to lose cross-modal context. NVIDIA's MoE architecture embeds the vision and audio encoders inside one model, replacing what previously took multiple calls with a single forward pass for multimodal perception. According to official figures, Nemotron 3 Nano Omni achieves 9x higher throughput than other open omni-modal models while maintaining comparable interactive response speed. Even more notable is its native high-resolution processing. A computer-use agent that H Company built on the model performs visual reasoning at a native input resolution of 1920×1080 pixels and shows markedly improved understanding of complex graphical interfaces on the OSWorld benchmark. Nemotron 3 Nano Omni tops the leaderboards of six benchmarks spanning document intelligence and audio/video understanding. The model is released with open weights, open datasets, and open training techniques, which means the entire community can verify, reproduce, and customize it. NVIDIA positions the Nemotron 3 series as a complete family of foundation models: Nano handles multimodal perception, Super handles high-frequency execution, and Ultra handles complex planning, and the three can work together to form a complete agent workflow. The value of Nemotron 3 Nano Omni is not merely that it is faster and more accurate; it represents a shift in thinking: where multimodality used to be solved by stitching models together, NVIDIA now wants to unify them in one model and bypass that engineering debt entirely.","https:\u002F\u002Fblogs.nvidia.com\u002Fblog\u002Fnemotron-3-nano-omni-multimodal-ai-agents\u002F","474eef8c-e0c3-46cf-adee-c089558220f9",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm",{"id":18,"name":19,"slug":19,"description":13,"color":13},"499f4b56-819d-49a3-9609-33e775143b86","multimodal",{"id":21,"name":22,"slug":22,"description":13,"color":13},"b9bd9039-fcdb-41a8-b85b-fc1587def2b9","open-source","2026-04-28T22:10:00Z","2026-04-28T22:06:07.794932Z","2026-04-28T22:06:07.794946Z",true,"agent",3]