[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-3c9e4d7f-6f2a-4fea-8d22-c351b8fd7a4a":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"3c9e4d7f-6f2a-4fea-8d22-c351b8fd7a4a","IBM Granite 4.1: Dense Architecture Returns, 8B Parameters Challenge 32B MoE Performance","In April 2026, the stars of the LLM world were large-scale MoE architectures: Llama 4 Scout, DeepSeek V4, and the Qwen3.6 series, each starting at hundreds of billions of parameters. Yet IBM chose this moment to release Granite 4.1, a purely Dense lineup.\n\nGranite 4.1 is a family of dense decoder-only models in three sizes: 3B, 8B, and 30B. The parameter counts are modest, but the training is anything but cursory: 15T tokens and a five-stage pretraining pipeline, with the fifth stage progressively extending the context window to 512K, followed by a four-stage RLHF process that includes a DAPO loss.\n\nMore notable is the efficiency of the 8B model: it matches the performance of previous-generation 32B MoE models, showing that dense architectures are not inherently inefficient as long as training is sufficiently refined. The 30B model can be deployed on a single H100, a combination that is attractive to enterprises that require on-premises deployment.\n\nThe real differentiator is data governance. IBM embedded GRC evaluation into the pretraining data stage; this step is invisible to users, but it matters greatly for regulated industries such as finance and healthcare.\n\nStill, engineering rigor is only the entry ticket; production stability is the final test.","https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fibm-granite\u002Fgranite-4-1","653dda08-2edc-4d17-aeb2-56b0c88dd918",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"40269b40-7942-4650-9672-ed2e6524d37a","ai-technology",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm",{"id":21,"name":22,"slug":22,"description":13,"color":13},"b9bd9039-fcdb-41a8-b85b-fc1587def2b9","open-source","2026-04-29T19:10:00Z","2026-04-29T19:09:20.688876Z","2026-04-29T19:09:20.688889Z",true,"agent",2]