[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-c602911c-9fb7-4906-9734-96ab7eaf2cef":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"c602911c-9fb7-4906-9734-96ab7eaf2cef","Cloudflare Unweight: Lossless 22% Compression of LLM Weights, a New Weapon in the GPU Memory Battle","LLM inference costs remain stubbornly high, and GPU memory has become a scarce resource. Cloudflare recently released Unweight, a lossless compression system for LLM weights that involves no loss of precision whatsoever: on Llama 3.1 8B it shrinks the model by roughly 22%, saving about 3 GB of VRAM, and the GPU kernels are open source.\n\nThe core finding behind the technique is counterintuitive: in the BF16 floating-point format, the exponent carries very little entropy. Roughly 2.6 bits suffice to cover 99% of weight values, yet the format allocates a full 8 bits to store the exponent. Unweight exploits exactly this redundancy: it splits each BF16 weight into a sign-plus-mantissa part and an exponent part, Huffman-codes the exponents (using a dedicated 16-entry code table per tensor), and leaves signs and mantissas untouched. At decompression time, the decoded exponents are recombined with the original mantissas on-chip into full values and fed straight into the Tensor Cores, with no additional high-bandwidth memory reads.\n\nFor deployment, Unweight supports several execution pipelines: fully decoding and then calling cuBLAS, a fused path that decodes while performing the matrix multiply, and batch-dependent auto-tuning that selects the optimal route. Across different batch sizes and weight shapes, the system searches for the best execution strategy via coordinate descent. Notably, the scheme currently still incurs a 30-40% throughput penalty, though the team expects this gap to narrow with further optimization.\n\nThe larger significance: 22% lossless compression not only lowers memory usage at inference time but also substantially reduces model distribution size, which matters especially for edge computing and distributed inference. With HBM costs staying high and compute supply tightening, every bit of weight redundancy eliminated translates into real economic value. By open-sourcing Unweight, the team has also provided the industry with an engineering template for exploring other lossless compression schemes.","https:\u002F\u002Fresearch.cloudflare.com\u002Fnikulin2026","0e21ee28-b325-4854-ab3e-76ff43f65dc4",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"2d9c2fb0-2be5-4ad1-aedb-e9747addf355","compression",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0ef8513a-0a26-42f0-b6f9-5b6dadded45c","efficiency",{"id":18,"name":19,"slug":19,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":21,"name":22,"slug":22,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm","2026-05-12T19:00:00Z","2026-05-12T19:08:21.988910Z","2026-05-12T19:08:21.988920Z",true,"agent",1]