Inside DeepSeek V4 Training: the Muon Optimizer and FP4 Mixed Precision Reshape Large-Model Efficiency

Published: 2026-04-28 | Source: https://api-docs.deepseek.com/news/news260424 | Tags: ai-efficiency, efficiency, llm, open-source

DeepSeek V4's technical breakthroughs lie not only in its architecture but in a rethinking of its training infrastructure.

On April 24, DeepSeek released the V4 preview, announcing that both the Pro version (1.6T total parameters, 49B activated) and the Flash version (284B total parameters, 13B activated) support a 1M-token context window, at an API price that is a fraction of GPT-5.5's. More noteworthy is the technical path behind those numbers: for the first time, DeepSeek used FP4 mixed precision at scale in pre-training. MoE expert weights are stored in FP4 while the remaining parameters run in FP8, yielding a 9.5x to 13.7x memory reduction relative to V3.2.

FP4 quantization compresses weight precision to 4 bits, saving roughly a further 50% of memory over FP8. This strategy is not naive compression: model quality is preserved through a quantization-aware training path. DeepSeek also introduced a new optimizer named Muon which, per the official description, "accelerates convergence and improves training stability". It is the first time since the Adam family that a major lab has deployed an entirely new optimizer architecture in ultra-large-scale training.

V4-Pro was trained on more than 32T tokens. Its Codeforces rating of 3206, above GPT-5.4's 3168, is the most persuasive proof of capability. Beyond the capability jump, though, the signal worth watching is that ultra-low-precision training is moving from stopgap to standard practice. FP4/FP8 mixed precision sharply lowers the hardware bar for training very large MoE models, and it lays the groundwork for domestic chips such as Huawei's Ascend NPUs to run models at this scale. DeepSeek has confirmed that V4 runs on both Nvidia GPUs and Huawei Ascend NPUs, a substantive step forward for China's domestic compute ecosystem.

This is not merely "cheaper". It is a fundamental shift in the training-infrastructure paradigm: ultra-low-precision training is redefining what scale of model a given tier of hardware can train, and DeepSeek has been the first to walk that path end to end.
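The memory effect of the FP4/FP8 split is simple byte arithmetic: FP4 stores half a byte per weight, FP8 one byte. The sketch below illustrates this for a 1.6T-parameter model; the `expert_fraction` knob and the helper name are made-up assumptions for illustration, since the article does not disclose the actual expert/non-expert parameter ratio:

```python
def weight_memory_gb(total_params: float, expert_fraction: float) -> float:
    """Approximate stored weight footprint in GB (1 GB = 1e9 bytes).

    Assumes the split described in the article: MoE expert weights in FP4
    (0.5 bytes each), all remaining parameters in FP8 (1 byte each).
    `expert_fraction` is a hypothetical knob; the real ratio is not public.
    """
    expert_params = total_params * expert_fraction
    other_params = total_params - expert_params
    return (expert_params * 0.5 + other_params * 1.0) / 1e9

# Illustrative numbers for a 1.6T-parameter model (V4-Pro's total size),
# assuming, hypothetically, that 90% of parameters live in MoE experts:
fp4_fp8 = weight_memory_gb(1.6e12, expert_fraction=0.9)  # 880.0 GB
bf16 = 1.6e12 * 2 / 1e9                                  # 3200.0 GB baseline
```

Under these assumed numbers the stored weights shrink from 3.2 TB in BF16 to under 0.9 TB, which is why the hardware bar for hosting and training such models drops so sharply; the article's 9.5x to 13.7x figure versus V3.2 presumably also counts optimizer state and activations, which this sketch ignores.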
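The article does not describe Muon's internals, but the publicly released Muon optimizer replaces Adam-style elementwise scaling with an approximate orthogonalization of the momentum matrix via a quintic Newton-Schulz iteration. Below is a minimal NumPy sketch of that public algorithm; whether DeepSeek's in-house variant matches it is an assumption, and the coefficients are those of the public reference implementation:

```python
import numpy as np

def newton_schulz_orth(G: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximately map G = U S V^T to U V^T (orthogonalize the update).

    Quintic coefficients come from the public Muon reference implementation;
    DeepSeek's internal variant (assumption) may differ.
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)   # scale so singular values are <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:                        # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

def muon_step(param, grad, buf, lr=0.02, beta=0.95):
    """One momentum-SGD step whose update matrix is orthogonalized (Muon)."""
    buf[:] = beta * buf + grad
    param -= lr * newton_schulz_orth(buf)

# Demo: orthogonalizing a random gradient-like matrix pushes its singular
# values toward 1, so every direction of the update gets similar magnitude.
rng = np.random.default_rng(0)
update = newton_schulz_orth(rng.standard_normal((64, 32)))
```

The design intuition, per Muon's public write-ups, is that equalizing the singular values of the update lets rare but important directions in the weight matrix make progress, which is one plausible source of the claimed faster convergence; note Muon applies only to 2-D weight matrices, with embeddings and norms typically kept on an Adam-family rule.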