Research Highlights: Associate Professor Cao Jianfeng and Assistant Professor Guo Danyang of Our School Each Publish an English-Language Article in Volume 26 of the German Law Journal
Posted: 2026-05-13



From Principle to Practice: Value Alignment in AI Ethics and Governance


Cao Jianfeng

School of Law, Shenzhen University


Abstract: As China rapidly advances in AI innovation and development, especially in frontier AI, its regulatory and ethical frameworks are under increasing pressure to ensure that technological progress aligns with human interests and societal values. This Article argues that AI value alignment—the process of ensuring AI systems act in accordance with human values, norms, and ethical principles—should be adopted as a strategic pillar in China’s evolving AI governance architecture. While China has already established a comprehensive legal, ethical, and self-regulatory landscape to address AI risks, these mechanisms often rely on reactive enforcement and external compliance. In contrast, AI value alignment offers a proactive, intrinsic approach that embeds safety and ethical constraints directly into AI systems, making them safer, more trustworthy, and responsive to human needs.

This study begins by mapping China’s current AI governance landscape, including national legislation such as the Cybersecurity Law, Personal Information Protection Law, and a growing set of regulations targeted at algorithms and generative AI. It also evaluates China’s normative commitments, such as the “human-centric” and “tech for good” principles articulated in national policy documents, and the increasing role of corporate self-regulation among major technology firms. While commendable in scope and ambition, these governance mechanisms often fall short in ensuring that AI behavior aligns with safety constraints and ethical intent—particularly when AI systems (such as agentic AI) become more autonomous and capable. This gap highlights the urgent need for a systematic value alignment strategy.

The Article then delves into the conceptual and technical foundations of AI value alignment, identifying both engineering challenges—such as reward misspecification, data bias, and model deception—and normative dilemmas, including moral pluralism, value aggregation, and dynamic ethics. Special attention is paid to frontier models such as large language models and artificial general intelligence (AGI), which pose alignment challenges at a scale previously unseen. Drawing on contemporary alignment techniques such as reinforcement learning from human feedback (RLHF) and principle-based approaches such as Anthropic’s Constitutional AI, the Article explores their limitations and calls for a more diversified, interdisciplinary, and forward-looking alignment research agenda.

Finally, the Article offers a roadmap for operationalizing AI value alignment across three key governance domains: law and regulation, ethical norms, and industry self-regulation. Recommendations include the incorporation of alignment assessments into regulatory filings, the development of technical standards for value alignment and ethics-by-design guidelines, and institutional investments in safety and alignment research. The Article concludes by asserting that value alignment is not merely a technical safeguard but a governance imperative for the age of autonomous AI and agentic AI. By integrating alignment into its AI governance strategy, China can not only enhance domestic safety and public trust but also better coordinate with global AI ethics and safety initiatives—ultimately contributing to the shared goal of human-aligned and beneficial artificial intelligence.


Keywords: Frontier AI; AI Governance; AI Safety; Value Alignment; Ethics by Design; Responsible AI



Digital Alchemy? Rethinking Copyright in the Age of AI-Generated Content: 

Lessons and Reflections from the AI Value Chain 


Zhang Xin (University of International Business and Economics), Fan Jinghe (Faculty of Law, University of Oxford), Guo Danyang (Shenzhen University, corresponding author)


Abstract: The emergence of generative artificial intelligence (generative AI) has profoundly reshaped content creation by lowering marginal costs, altering the role of human creators, and restructuring industrial processes. These shifts pose three critical challenges for intellectual property (IP) frameworks: redefining creative behaviors in human–AI collaborations, sustaining incentives for innovation, and addressing gaps in copyright mechanisms concerning data access and the attribution of outputs. Against this backdrop, the Beijing Internet Court (BIC) issued China’s first judicial ruling on the copyrightability of AI-generated images in 2023. The court held that AI-generated content (AIGC) could qualify for copyright protection where there is demonstrable human intellectual input and originality. However, the ruling also emphasized the need for case-by-case assessment, particularly in hybrid human–AI creative processes. This landmark decision has sparked intense debate over whether and how AIGC should be recognized as a “work” under copyright law, with particular focus on the requirement of human authorship and the threshold of originality. In addition, a systematic analysis of the AIGC value chain highlights the far-reaching implications of human authorship claims for various stakeholders, including creators, AI developers, prompt engineers, and end users. To address these emerging legal and ethical issues, this Article proposes two key reforms: first, extending the scope of fair use under Article 24 of China’s Copyright Law to account for AI-generated works, and second, establishing a “whitelist” mechanism in the Regulations for the Implementation of the Copyright Law to govern data mining and AI training practices. These measures aim to balance the protection of human creativity, ensure equitable value distribution among stakeholders, and foster sustainable innovation in the AI-driven creative ecosystem.


Keywords: Generative AI; Copyright Protection; AI Value Chain; AI-Generated Content (AIGC); Fair Use; Transformative Use


About the Journal:

The German Law Journal (ISSN 2071-8322), founded in 2000 at Goethe University Frankfurt and published by Cambridge University Press since 2019, is a peer-reviewed, open-access international law journal. It focuses on cutting-edge research in comparative law, EU law, and international law, and is dedicated to advancing transnational legal scholarship. In terms of academic impact, the journal is indexed in the Web of Science Emerging Sources Citation Index (ESCI) and in Scopus. Its 2024 Journal Impact Factor (JIF) is 1.2, ranking 119th among the 444 law journals in Clarivate’s Journal Citation Reports (JCR); its 2024 CiteScore is 3.1, ranking 175th among the 1,105 law journals in Scopus and placing it in Q1; its h-index is 42. According to Google Scholar Metrics, the journal ranks 2nd worldwide in both the “European Law” and “International Law” categories and 14th worldwide in the general “Law” category, while the Washington & Lee Law Journal Rankings list it 1st worldwide in European law. It is internationally recognized as one of the leading venues for scholarship on European and transnational law.


Article link:

https://www.cambridge.org/core/journals/german-law-journal/issue/55BE2C2DF907B460098406C13AA38C0D