# Split-Text Conditioning: DiT-ST Advances Text-to-Image Diffusion Models

Source: https://arxiv.org/abs/2505.19261
Tags: diffusion, llm, open-source, text-to-image, transformer
Published: 2026-04-21

Researchers from Tongji University and other institutions recently published a paper on arXiv proposing DiT-ST, a novel framework that uses split-text conditioning to significantly improve diffusion Transformers on text-to-image generation.

**Technical breakthroughs**:
Conventional text-to-image diffusion models suffer from incomplete text comprehension: text-length limits, softmax competition, and positional bias lead to attribute-binding errors and semantic confusion. DiT-ST addresses these challenges with three key innovations:

1. **Text parsing**: A large language model (LLM) parses complex text into semantic primitives (objects, relations, attributes) and builds a hierarchical graph
2. **Hierarchical input**: The full text is converted into simplified split-text inputs, reducing syntactic complexity
3. **Incremental injection**: Based on each denoising stage's sensitivity to different semantic types, semantic information is injected at different timesteps, in object → relation → attribute priority order

**Performance gains**:
- 69% overall accuracy on the GenEval benchmark, close to SDv3.5 Large (71%)
- CLIPScore of 34.09 on the COCO-5K dataset, surpassing SDv3.5 Large by about 4.1%
- More robust on long, complex prompts, addressing existing models' sensitivity to text length

Beyond raising generation quality, the study's more important contribution is revealing an underlying pattern: a diffusion model's sensitivity to different semantic types varies across denoising stages, offering a new lens on how diffusion models build up semantics during generation. This semantic-decomposition approach could drive further progress in multimodal generation.
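The priority-ordered injection in innovation 3 can be pictured as a timestep-dependent gate over the parsed primitives: coarse semantics (objects) condition the model from the start of denoising, while finer semantics (relations, then attributes) join later. The sketch below is a minimal illustration of that idea; the function name and schedule thresholds are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of staged (split-text) conditioning in a diffusion
# sampler. The schedule thresholds below are hypothetical, chosen only to
# demonstrate the object -> relation -> attribute injection order.

def active_primitives(t: int, T: int, schedule=(0.0, 0.4, 0.7)) -> list[str]:
    """Return which semantic primitive types are injected at timestep t.

    Denoising runs from t = T (pure noise) down to t = 0. Each primitive
    type switches on once denoising progress passes its schedule entry,
    so objects are injected first and attributes last.
    """
    progress = 1.0 - t / T  # 0.0 at the start of denoising, 1.0 at the end
    kinds = ("objects", "relations", "attributes")
    return [kind for kind, start in zip(kinds, schedule) if progress >= start]


if __name__ == "__main__":
    T = 1000
    for t in (1000, 500, 100):
        # At each sampling step, the conditioning embedding would be built
        # only from the primitive types active at that step.
        print(t, active_primitives(t, T))
```

In a real sampler, the returned list would select which split-text embeddings are concatenated into the cross-attention conditioning at that step, rather than feeding the full prompt at every timestep.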