OpenAI and Five Industry Giants Release the MRC Protocol: Reshaping the Network Architecture of Large-Scale AI Training

The core bottleneck of large-scale AI training is shifting from compute to the network.

On May 6, OpenAI, together with AMD, Broadcom, Intel, Microsoft, and NVIDIA, released the Multipath Reliable Connection (MRC) protocol, which targets network latency and failure problems when training on GPU clusters of ten thousand cards or more. The protocol extends the RoCE standard, incorporates SRv6 source routing, and has been open-sourced to the whole industry through the Open Compute Project (OCP).

The pain point of large-scale training is clear: a single delayed transmission can stall an entire training job and leave GPUs idle. Network congestion and link or device failures are the main causes, and the larger the cluster, the more frequent the failures. Traditional three- and four-tier network architectures also hit a scalability ceiling.

MRC's core innovation is a multi-plane network design: a single 800 Gb/s interface is split into several smaller links, so just two tiers of switches can connect roughly 131,000 GPUs, sharply reducing network power consumption and component count while increasing path diversity. At the traffic-scheduling layer, adaptive packet spraying spreads packets across hundreds of parallel paths, avoiding congestion in the network core; the receiver correctly reassembles out-of-order packets based on their memory addresses.

At the control layer, MRC drops traditional dynamic routing protocols in favor of SRv6 source routing: the sender specifies each packet's path directly, switches need only static forwarding configuration, and failure recovery shrinks from seconds to microseconds. MRC is already deployed on NVIDIA GB200 supercomputers, where it automatically routes around link flaps and switch restarts without interrupting training jobs.

This is not one company's closed-door achievement but an industry consensus formed around OCP, which means any vendor building large-scale AI training infrastructure stands to benefit from the standard. For the industry, the implications are concrete: the engineering barrier to training on ten-thousand-GPU clusters drops, training even larger models becomes feasible, and network failures stop being the weak link in training stability. For Chinese LLM vendors, MRC's open-source nature means they can adopt it directly, though turning it into production engineering will still take substantial hands-on work. Breakthroughs at the infrastructure layer tend to be the biggest and most far-reaching kind.

Source: https://openai.com/index/mrc-supercomputer-networking/
Published: 2026-05-10
Tags: ai-technology, efficiency, inference, llm
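The claim that two switch tiers can reach roughly 131,000 GPUs is consistent with standard folded-Clos (leaf-spine) sizing. A minimal sketch, assuming a non-blocking two-tier fabric built from radix-512 switches (the radix is my illustrative assumption, not a figure from the announcement):

```python
def two_tier_endpoints(radix: int) -> int:
    """Max endpoints in a non-blocking two-tier leaf-spine fabric.

    Each spine port connects one leaf, so there can be up to `radix`
    leaves; each leaf dedicates half its ports to endpoints and half
    to spine uplinks, giving radix * (radix // 2) endpoints total.
    """
    leaves = radix
    down_ports_per_leaf = radix // 2
    return leaves * down_ports_per_leaf

print(two_tier_endpoints(512))  # → 131072, matching the ~131,000 figure
```

Splitting one 800 Gb/s interface into several smaller links is what makes such high effective switch radix reachable in practice: more, narrower ports per device means fewer tiers for the same GPU count.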
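The spraying-plus-reassembly mechanism can be illustrated with a toy model (the function names and fixed-offset chunking are my assumptions; in real deployments this runs in NIC hardware): each packet carries a destination memory offset, packets take different paths and arrive out of order, and the receiver simply writes every payload at its offset, so arrival order never matters.

```python
import random

def spray(message: bytes, chunk: int, n_paths: int):
    """Split a message into (offset, payload, path) packets,
    assigning paths round-robin -- a toy stand-in for adaptive
    packet spraying across hundreds of parallel paths."""
    return [(off, message[off:off + chunk], (off // chunk) % n_paths)
            for off in range(0, len(message), chunk)]

def receive(packets, total_len: int) -> bytes:
    """Reassemble by writing each payload at its memory offset,
    regardless of the order in which packets arrived."""
    buf = bytearray(total_len)
    for off, payload, _path in packets:
        buf[off:off + len(payload)] = payload
    return bytes(buf)

msg = b"all-reduce gradient shard"
packets = spray(msg, chunk=4, n_paths=8)
random.shuffle(packets)                 # simulate out-of-order arrival
assert receive(packets, len(msg)) == msg
```

Because reassembly is a direct memory write keyed by address, the receiver needs no per-flow reorder buffers, which is what makes spraying over many paths cheap.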
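The SRv6 control-plane idea can be sketched as follows (hop names and the failover logic are illustrative; real SRv6 encodes the path as a segment list in an IPv6 Segment Routing Header): the path lives in the packet itself, switches hold no dynamic routing state, so routing around a dead hop is just the sender rewriting the segment list rather than waiting for a routing protocol to reconverge.

```python
def forward(packet: dict, fabric: set) -> list:
    """Walk a packet through the fabric along its own segment list.
    Switches only match the next segment -- pure static forwarding."""
    trace = []
    for hop in packet["segments"]:
        if hop not in fabric:           # this hop is down
            raise ConnectionError(hop)
        trace.append(hop)
    return trace

fabric = {"leaf1", "spine2", "leaf7"}   # healthy switches ("spine3" failed)
pkt = {"segments": ["leaf1", "spine3", "leaf7"], "payload": b"..."}

try:
    forward(pkt, fabric)
except ConnectionError:
    # Sender-side failover: rewrite the segment list immediately,
    # instead of waiting seconds for dynamic routing to reconverge.
    pkt["segments"] = ["leaf1", "spine2", "leaf7"]

print(forward(pkt, fabric))             # → ['leaf1', 'spine2', 'leaf7']
```

Pushing the path decision to the sender is what turns failure recovery into a local, microsecond-scale action: no switch has to relearn anything.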