探索与争鸣 (Exploration and Free Views), 2025, Vol. 1, Issue (12): 76-84.

• Academic Debate •

The Application Limits of “System Evaluation” in the Rule of Law for Artificial Intelligence: A Discussion with Professor Su Yu

Liu Ruiqiang

  • Publication date: 2025-12-20  Release date: 2025-12-20
  • About the author: Liu Ruiqiang, Research Fellow, Research and International Communication Center for Xi Jinping Thought on the Rule of Law, China University of Political Science and Law. (Beijing 100088)

The Application Limits of “System Evaluation” in AI Governance: A Discussion with Professor Su Yu

Liu Ruiqiang   

  • Online: 2025-12-20  Published: 2025-12-20

Abstract: The shift from “algorithm explanation” to “system evaluation” marks a transformation of the foundational information tools of the rule of law for artificial intelligence. “System evaluation” has clear advantages in remedying the limitations of algorithm-explanation mechanisms and in building a trustworthy AI ecosystem. In practice, however, it still faces risks of fairness distortion, misleading performance claims, and technological monopoly; while the technology remains immature and standards remain fragmented, it should not be hastily given comprehensive legal status. The sound approach for now is to cautiously define the role of “system evaluation” as a “baseline safeguard” within the legal framework for AI and, through classified and tiered implementation, authoritative technical standards, and independent, transparent certification mechanisms, to promote its healthy integration with the rule-of-law system under a “soft law first”, incremental regulatory strategy, thereby balancing governance effectiveness with the protection of innovation.

Key words: system evaluation, AI governance, limits of the rule of law, technical standards

Abstract: The transition from “algorithm explanation” to “system evaluation” marks a paradigm shift in the foundational information tools of AI legal governance. System evaluation offers substantial advantages in compensating for the limitations of algorithm-explanation mechanisms and in fostering a trustworthy AI ecosystem. However, its practical deployment faces multiple risks, such as fairness distortion, misleading performance claims, and technological monopolies. Under current conditions of immature technology and fragmented standards, a premature rush toward comprehensive legalization of system evaluation is ill-advised. The appropriate strategy at present is to cautiously define the role of “system evaluation” as a “baseline safeguard” within the AI legal governance framework. Through classified and tiered implementation, the establishment of authoritative technical benchmarks, and the creation of independent, transparent certification systems, a phased approach that prioritizes “soft law” and incremental regulation can facilitate its sound integration with the legal system, thereby balancing governance efficacy with the protection of innovation.

Key words: system evaluation, AI governance, legal governance limits, technical standards