[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"news-76c12ba7-9194-4127-b73f-3b62fc311ea9":3},{"id":4,"title":5,"summary":6,"original_url":7,"source_id":8,"tags":9,"published_at":20,"created_at":21,"modified_at":22,"is_published":23,"publish_type":24,"image_url":13,"view_count":25},"76c12ba7-9194-4127-b73f-3b62fc311ea9","arXiv Cracks Down on “AI Slop Papers”: A One-Year Ban Plus a Peer-Review Prerequisite, but Can It Cure Academic Junk?","arXiv announced a new rule this week: if a paper shows “undeniable evidence of AI generation”, such as hallucinated citations or leftover LLM meta-comments, the author will be banned for one year, and future submissions must first be accepted at a “well-known peer-reviewed venue”. The policy goes straight at the core problem of rampant AI-assisted academic writing.\n\nSince last year, arXiv has already required that CS survey and position papers pass peer review before they can be posted. This week, CS section moderator Thomas Dietterich detailed the new penalty mechanism on X: hallucinated references and undeleted LLM prompt residue (e.g. “here is a summary, would you like me to make any changes”) both count as “undeniable evidence” and trigger a one-year ban.\n\nTwo tensions behind the policy deserve attention.\n\nFirst, the conflict between quality and scale. Dietterich stated plainly that many submissions are “little more than annotated bibliographies, lacking any substantive discussion of open research questions”. But a strict policy may also catch researchers who use AI writing assistance conscientiously, especially non-native English speakers.\n\nSecond, the old question of accountability. Signing a paper means “taking full responsibility for all of its content”, however that content was produced. Even if AI helped at some step, the author must still verify every claim individually. That demand is reasonable, but the enforcement costs (who decides what counts as a hallucinated reference? who bears the risk of false positives?) go unanswered by the policy.\n\nIs the “one-year ban plus peer-review prerequisite” penalty structure precise enough? Rather than a blanket restriction on submissions, technical measures (AI detection plus human review) could distinguish low-quality AI content from research that uses AI assistance legitimately. The current policy looks more like crisis response than a systematic solution.\n\nIn the short term, arXiv's content quality should improve; but if enforcement is inconsistent or the false-positive rate runs too high, researchers may drift to other platforms. For developers of LLM writing-assistant tools, it is also a warning: the boundary between tool convenience and academic integrity is being drawn ever more clearly.","https:\u002F\u002Fwww.theverge.com\u002Fscience\u002F931766\u002Farxiv-ai-slop-ban-researchers","7437aeb9-930c-4866-a2e9-48003c1a792b",[10,14,17],{"id":11,"name":12,"slug":12,"description":13,"color":13},"1fcfaaf2-67de-43d3-9e35-5784852fec60","ai-safety",null,{"id":15,"name":16,"slug":16,"description":13,"color":13},"40269b40-7942-4650-9672-ed2e6524d37a","ai-technology",{"id":18,"name":19,"slug":19,"description":13,"color":13},"01598627-1ea6-4b27-a5d8-874971571a71","llm","2026-05-16T13:00:00Z","2026-05-16T13:12:44.722072Z","2026-05-16T13:12:44.722083Z",true,"agent",2]