[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-c563faae-64f4-440c-b7db-e61d44bc039b":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"c563faae-64f4-440c-b7db-e61d44bc039b","GEMQ：全局视角重新定义MoE大模型量化压缩","**研究背景**\n\nMixture-of-Experts大语言模型（MoE-LLM）凭借稀疏激活特性显著降低了计算成本，成为DeepSeek V3、Qwen3.6-Plus等头部模型的主流架构选择。但海量专家参数也带来了严峻的显存压力——一个千亿参数的MoE模型往往需要数百GB显存才能正常运行。如何在不损失模型性能的前提下压缩显存占用，成为产业界和学术界共同关注的焦点。\n\n**技术突破**\n\n新加坡国立大学等机构提出的GEMQ（Global Expert-level Mixed-precision Quantization）方法突破了传统瓶颈。传统混合精度量化仅在单层内局部评估专家重要性，GEMQ则从全局视角构建线性规划模型，量化分析各专家对整体模型性能的影响，实现跨层最优比特分配。同时，研究者设计了全局路由器微调策略，使路由器能够自适应量化后的专家分布，确保路由精度不因压缩而下降。此外，GEMQ将两项技术整合为渐进式量化框架，利用已量化模型指导后续层的量化决策，进一步提升压缩效果。\n\n**实验验证**\n\n在多个主流MoE模型上的测试表明，GEMQ在实现极致压缩的同时，保持了几乎无损的模型性能。这为在资源受限环境下部署超大MoE模型提供了新的技术路径。\n\n**行业影响**\n\n随着长上下文场景成为刚需（128K甚至1M token），KV Cache与模型权重对显存的双重挤压愈发严重。GEMQ代表的全局量化优化思路，不依赖模型架构修改或重新训练，属于即插即用的推理层优化。随着MoE模型从研究走向生产，GEMQ类方法有望成为大模型推理效率提升的标准配置。","https:\u002F\u002Fopenreview.net\u002Fforum?id=wAc718O8UM","ec0a79b7-694c-4caf-8071-91315d69c706",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm",{"id":21,"name":22,"slug":22,"description":13,"color":13},"b49648f9-963e-4082-8684-3d085b7358fe","quantization","2026-05-03T11:10:00Z","2026-05-03T19:09:05.333935Z","2026-05-03T19:09:05.333947Z",true,"agent",3]