NVIDIA teams up with Black Forest Labs: FP8 quantization brings FLUX.2 to consumer RTX GPUs

Published: 2026-05-13
Source: https://blogs.nvidia.com/blog/rtx-ai-garage-flux-2-comfyui/
Tags: diffusion, efficiency, quantization, text-to-image

Black Forest Labs released the FLUX.2 family of image-generation models in early May 2026. At 32 billion parameters, the model runs directly in ComfyUI, but its 90 GB VRAM requirement put it entirely out of reach of consumer GPUs. NVIDIA moved quickly, partnering with Black Forest Labs to apply FP8 quantization to FLUX.2. The optimization cuts VRAM requirements by 40% while keeping image quality essentially unchanged, and combined with ComfyUI's weight streaming feature it lets consumer RTX cards run this flagship model, with a performance gain of roughly 40%.

The collaboration carries several technical signals worth noting. First, FP8 quantization is becoming the standard path for deploying large models: instead of waiting for downstream vendors to optimize on their own, the upstream chipmaker steps in directly to make sure its hardware is not sidelined by the memory wall. Second, ComfyUI, as a hub of the open-source community, plays a key role in adapting models to hardware; weight streaming lets VRAM and system RAM work in tandem, bypassing the physical limits of a single card. For developers chasing high-resolution generation, the combination of FLUX.2, an RTX 4080 or better, and ComfyUI has moved into practical use rather than remaining a lab demo.
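As a back-of-the-envelope check on the numbers above, here is a minimal sketch of the weight-memory arithmetic. The per-parameter byte counts (2 bytes for 16-bit weights, 1 byte for FP8) are standard; the assumption that the headline 90 GB figure also covers activations, text encoders, and framework overhead, not just weights, is ours.

```python
# Rough VRAM estimate for a 32B-parameter model at different weight precisions.
# Assumption: the reported 90 GB requirement includes more than the raw weights
# (activations, text encoders, overhead); this sketch models the weights only.

def weight_vram_gb(n_params: float, bytes_per_param: float) -> float:
    """Weight memory in GB (1 GB = 2**30 bytes)."""
    return n_params * bytes_per_param / 2**30

N = 32e9  # FLUX.2 parameter count per the announcement

bf16 = weight_vram_gb(N, 2.0)  # 16-bit weights
fp8 = weight_vram_gb(N, 1.0)   # FP8 weights

print(f"16-bit weights: {bf16:.1f} GB")  # ~59.6 GB
print(f"FP8 weights:    {fp8:.1f} GB")   # ~29.8 GB
print(f"reduction:      {1 - fp8 / bf16:.0%}")  # 50% on weights alone
```

Note that weights alone shrink by 50%; the reported 40% end-to-end VRAM reduction is smaller because the non-weight portions of memory do not shrink with quantization.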
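Weight streaming is, conceptually, demand paging for model layers: all weights stay in system RAM, and only the layers currently needed are copied into a bounded VRAM budget. A framework-free sketch of that idea follows; the class, the layer sizes, and the eviction policy are illustrative assumptions, not ComfyUI's actual implementation.

```python
from collections import OrderedDict

class WeightStreamer:
    """Toy model of layer-wise weight streaming: layers live in 'RAM' and are
    copied into a bounded 'VRAM' cache on demand, evicting the least recently
    used layer when the budget would be exceeded.
    (Illustrative only; not ComfyUI's actual mechanism.)"""

    def __init__(self, layers: dict, vram_budget: int):
        self.ram = layers          # layer name -> size in GB (stand-in for weights)
        self.vram = OrderedDict()  # layers currently resident on the GPU
        self.budget = vram_budget
        self.transfers = 0         # host-to-device copies performed

    def fetch(self, name: str) -> None:
        if name in self.vram:              # hit: mark as most recently used
            self.vram.move_to_end(name)
            return
        size = self.ram[name]
        while self.vram and sum(self.vram.values()) + size > self.budget:
            self.vram.popitem(last=False)  # evict least recently used layer
        self.vram[name] = size
        self.transfers += 1

# Eight 4 GB "transformer blocks" run twice through a 16 GB card:
streamer = WeightStreamer({f"block{i}": 4 for i in range(8)}, vram_budget=16)
for _ in range(2):
    for i in range(8):
        streamer.fetch(f"block{i}")
print(streamer.transfers)  # sequential access thrashes LRU: 16 transfers
```

The toy run shows the cost this hides: when the model is larger than the budget and layers are visited in order, every fetch is a miss, so throughput depends on host-to-device bandwidth. This is why streaming trades some speed for the ability to run at all on smaller cards.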