Chrome Quietly Downloads a 4GB Gemini Nano: The Browser Becomes a New Battleground for Local AI Inference

Google Chrome was recently found to be silently downloading the Gemini Nano model onto eligible devices, a file as large as 4GB that automatically re-downloads after users delete it. What looks like a browser product decision in fact reflects a deeper shift in how AI technology reaches users.

Gemini Nano is the local inference model Google built for Chrome's in-browser AI features, such as scam detection, writing assistance, and smart form filling. Compared with cloud calls, a local model means user data never has to leave the device, which is a step forward for privacy. However, at 4GB the model is larger than the Chrome installer itself, a significant hidden cost for devices with limited storage. Chrome does not disclose this requirement prominently at install time; it is mentioned only in passing in a lengthy developer document, an information asymmetry that shows little respect for users.

The deeper trend is that the browser is evolving from a "web rendering engine" into an "AI operating system". When Google chooses to embed the model directly into Chrome rather than rely on cloud APIs, the user's own compute and storage become part of the AI distribution system. The logic mirrors Apple shipping Apple Intelligence on iPhone/Mac: end-user devices are becoming another front in the AI competition, not merely a traffic entry point.

For users, local inference brings faster responses and genuine offline availability; the cost is that device resources stay occupied, and once the relevant features are enabled, the 4GB file is nearly impossible to remove for good. For users short on storage, that may matter more than any AI feature itself.

Source: https://www.theverge.com/tech/924933/google-chrome-4gb-gemini-nano-ai-features
Tags: gemini, google, inference, llm
Published: 2026-05-06
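As a minimal sketch of what "the model lives in the browser" means for web developers: Chrome exposes its on-device model through an experimental built-in Prompt API. The sketch below assumes the current experimental surface (a global `LanguageModel` with an `availability()` method reporting states such as "available" or "downloadable"); the API is behind flags/origin trials and its names and states may change, so treat this as an illustration, not a stable contract. The `shouldOfferLocalAI` helper is a hypothetical policy function, not part of any API.

```javascript
// Hedged sketch: feature-detect Chrome's experimental built-in Prompt API,
// which is backed by the on-device Gemini Nano model. Assumption: the API is
// exposed as a global `LanguageModel`; in other browsers (or Node) that
// global simply does not exist.
async function localModelStatus() {
  // `typeof` on an undeclared identifier is safe and returns "undefined".
  if (typeof LanguageModel === "undefined") return "unsupported";
  // Per the current experimental docs, this resolves to one of:
  // "unavailable", "downloadable", "downloading", "available".
  return await LanguageModel.availability();
}

// Hypothetical policy helper: only offer local AI features when the model is
// already on disk, so the page never triggers the multi-gigabyte download
// as a side effect.
function shouldOfferLocalAI(status) {
  return status === "available";
}

localModelStatus().then((status) => {
  console.log(`local model: ${status}, offer local AI: ${shouldOfferLocalAI(status)}`);
});
```

Run outside Chrome, this reports "unsupported" and declines to offer local AI; the point of the guard is that a page can degrade gracefully to cloud inference instead of assuming the 4GB model is present.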