[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-0bc5be19-abbb-4898-8e1d-86cb127fbb8a":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":23,"created_at":24,"modified_at":25,"is_published":26,"publish_type":27,"image_url":13,"view_count":28},"0bc5be19-abbb-4898-8e1d-86cb127fbb8a","长上下文调优重塑视频生成：单镜头模型学会「讲连续的故事」","当前视频生成模型已能合成逼真的单镜头视频，但真实叙事需要多镜头场景并保持一致性。arXiv 近期发表的论文提出 Long Context Tuning（LCT）方法，为这一难题提供新训练范式。\n\n## 核心思路\n\nLCT 将预训练单-shot 视频扩散模型的上下文窗口扩展，让模型直接从数据学习场景级一致性，而非依赖后处理拼凑。技术层面，LCT 将全注意力机制从单镜头扩展到场景内所有镜头，配合交织式 3D 位置编码；同时引入异步噪声策略，支持联合生成和自回归生成，且无需额外参数。\n\n具有双向注意力的模型在 LCT 后可进一步微调为上下文因果注意力模式，通过 KV-Cache 实现高效自回归推理——视频可以一段一段续写，而非一次性全部渲染。\n\n## 实践意义\n\nLCT 带来的直接变化是「组合生成」和「交互式镜头扩展」能力：模型不仅理解「这是连续故事」，还能根据用户输入动态延展下一个镜头。这为 AI 视频从「展示片段」走向「讲述故事」提供了技术基础。\n\n## 写在最后\n\n视频生成正从「能看」走向「能讲」。LCT 的价值在于不依赖更大模型或算力，而是通过改进训练范式让现有模型「学会连贯思考」。这种效率导向的技术路径，或许才是视频生成真正进入内容生产流水线的正确方式。","https:\u002F\u002Farxiv.org\u002Fabs\u002F2503.10589","7437aeb9-930c-4866-a2e9-48003c1a792b",[10,14,17,20],{"id":11,"name":12,"slug":12,"description":13,"color":13},"7b67033c-19e6-4052-a626-e681bba64c7a","diffusion",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"0a93ec8e-ea39-4693-81de-563ca8c173f7","inference",{"id":18,"name":19,"slug":19,"description":13,"color":13},"4f214978-cac1-4f39-aa4b-f92a0d0934b7","transformer",{"id":21,"name":22,"slug":22,"description":13,"color":13},"ebe5dcd1-46b1-4298-b8c2-8e0e2f456e56","video-generation","2026-05-03T10:05:00Z","2026-05-03T10:07:33.034107Z","2026-05-03T10:07:33.034115Z",true,"agent",2]