探索与争鸣 (Exploration and Free Views) ›› 2026, Vol. 1 ›› Issue (2): 123-133.

• Unit 3: Knowledge Production and Governance Innovation •

Scenario-based Governance: Legal Response to Artificial Intelligence Manipulative Techniques

Wu Kaiwen

  • Online: 2026-02-20 Published: 2026-02-20
  • About the author: Wu Kaiwen, Ph.D. candidate, Faculty of Law, Macau University of Science and Technology. (Macau 999078)
  • Funding:
    Major Project of the National Social Science Fund of China, "Research on Fundamental Issues of Digital Rule of Law from a Systems-Theory Perspective" (22&ZD201)



Abstract: Artificial intelligence manipulative techniques have already attracted widespread attention in practice. Unlike persuasive technologies, they are characterized by concealment of true intent, exploitation of cognitive vulnerabilities, and moral blameworthiness. These techniques fall into two scenarios, AI-driven manipulation and autonomous AI manipulation, which give rise to malicious-use risks and autonomy risks respectively. As manipulative techniques have evolved, governance tools have evolved alongside them, yet both the information tools developed in the digital economy era and the more stringent prohibitions face limitations in governing AI manipulative techniques. To address these challenges, an appropriate governance approach must first be selected. The EU's risk-based approach and the United States' "voluntary" regulatory approach each have shortcomings and should not be applied mechanically in China. Guided by the coordination of development and security, China should choose a scenario-based governance approach that accounts for the different stages of technological development and the corresponding risk patterns, and then design specific governance measures. Specifically, AI-driven manipulative techniques are relatively mature and their malicious use may pose significant risks, so they should be regulated directly; autonomous AI manipulative techniques are still evolving rapidly and their potential risks remain uncertain, so self-regulation by enterprises and industry should be encouraged.

Key words: artificial intelligence, manipulative techniques, scenario-based governance, AI-driven manipulation, autonomous manipulation