As AI technology transcends borders, the frameworks governing it evolve with it. The European Union’s AI Act and China’s AI Safety Governance Framework 2.0 (a bilingual version of which was released in September 2025) present two distinct philosophies of risk management and ethical oversight in AI. Examining their differences yields lessons that can inform how enterprises develop their own privacy management and AI governance frameworks in an increasingly interconnected world, and prepare them to expand into either region when required.
Risk Classification Standards
The EU AI Act approaches risk classification through a technology-centric, tiered system. It classifies AI systems into four categories: prohibited, high-risk, limited-risk, and minimal-risk, based on their intended use and societal impact. Oversight and requirements increase with risk level, focusing on the technology’s properties and their potential effects on fundamental rights or safety.
In contrast, China’s AI Safety Governance Framework 2.0 adopts a dynamic, multidimensional model that classifies and grades risks by application scenario, intelligence level, and scale. It distinguishes risks inherent to the technology itself, risks arising from its application, and derivative risks, alongside sectoral and operational impacts. AI systems are graded into five risk levels – Low, Moderate, Considerable, Major, and Extremely Serious – aligned with the corresponding AI security incident response levels, and security assessments and audits are required in proportion to the assigned level.
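To make the contrast concrete, here is a minimal Python sketch of how an enterprise operating in both regions might crosswalk the two taxonomies into a single control set. The tier and grade names come from the frameworks as summarized above; the control mappings and the rule that grades of Considerable and above trigger an assessment are illustrative assumptions, not regulatory text.

```python
from enum import Enum

# Tier and grade names follow the two frameworks as summarized above.
class EUAIActTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

class ChinaRiskGrade(Enum):
    LOW = 1
    MODERATE = 2
    CONSIDERABLE = 3
    MAJOR = 4
    EXTREMELY_SERIOUS = 5

# Illustrative baseline controls per EU tier: an internal policy choice,
# not text from the AI Act.
BASELINE_CONTROLS = {
    EUAIActTier.PROHIBITED: ["do-not-deploy"],
    EUAIActTier.HIGH_RISK: ["conformity-assessment", "human-oversight", "logging"],
    EUAIActTier.LIMITED_RISK: ["transparency-notice"],
    EUAIActTier.MINIMAL_RISK: [],
}

def controls_for(eu_tier: EUAIActTier, cn_grade: ChinaRiskGrade) -> list[str]:
    """Union of controls implied by the stricter of the two applicable regimes."""
    controls = list(BASELINE_CONTROLS[eu_tier])
    # Assumption: Considerable and above triggers a security assessment/audit,
    # reflecting the framework's risk-proportional requirements.
    if cn_grade.value >= ChinaRiskGrade.CONSIDERABLE.value:
        controls.append("security-assessment-and-audit")
    return controls
```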
Accountability and AI Safety Approval Approach
The EU AI Act outlines key roles with specific compliance responsibilities:
- Providers: Develop or place AI systems in the EU market
- Deployers: Use AI systems in a professional context
- Importers: Introduce AI systems from outside the EU
- Distributors: Make AI systems available in the EU market
- Product Manufacturers: Incorporate AI systems into their products
- Authorized Representatives: Act for non-EU Providers in the EU
Each role has obligations regarding safety, transparency, risk management, record-keeping, and cooperation with authorities.
The Act places particular accountability on distributors and authorized representatives to ensure that only safe, conformity-assessed products reach the EU market. Its obligations phase in between February 2025 and August 2027.
The China AI Safety Governance Framework 2.0 defines four key roles:
- Developers: Design AI models and algorithms
- AI service providers: Deploy and operate AI systems
- System/Application users: Utilize AI systems
- Government authorities: Supervise compliance
Each role has specific safety responsibilities throughout the AI lifecycle. Developers prioritize inherent safety in their design and technology choices. Providers must ensure secure deployment and respond to operational risks. Users are responsible for safe usage and risk reporting, while government authorities oversee compliance and establish regulations.
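Enterprises active in both markets often need to map internal functions onto both sets of roles. The sketch below shows one way to keep such a role-to-obligation registry; the obligation strings paraphrase the summaries above and are not statutory text.

```python
from dataclasses import dataclass, field

@dataclass
class RoleProfile:
    regime: str
    role: str
    obligations: list[str] = field(default_factory=list)

# Obligations paraphrase this article's summaries, not statutory text.
ROLE_REGISTRY = [
    RoleProfile("EU AI Act", "Provider",
                ["risk management", "technical documentation", "conformity assessment"]),
    RoleProfile("EU AI Act", "Deployer",
                ["use per instructions", "human oversight", "incident reporting"]),
    RoleProfile("China Framework 2.0", "Developer",
                ["inherent safety in design", "secure technology choices"]),
    RoleProfile("China Framework 2.0", "AI service provider",
                ["secure deployment", "operational risk response", "pre-launch filing"]),
]

def obligations_for(regime: str, role: str) -> list[str]:
    """Look up the illustrative obligations attached to one regime/role pair."""
    for profile in ROLE_REGISTRY:
        if profile.regime == regime and profile.role == role:
            return profile.obligations
    raise KeyError(f"no profile for role {role!r} under {regime!r}")
```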
Since 2024, the Cyberspace Administration of China (CAC) has required both developers and AI service providers of any public-facing AI system to file and obtain approval before launch. The CAC publishes the approved filing numbers and enforces compliance proactively through measures such as system shutdowns and fines.
Privacy Governance and Personal Data Restrictions
The EU AI Act itself imposes no explicit cross-border transfer restrictions on personal data; instead, it requires compliance with the EU GDPR whenever AI systems process personal data.
In contrast, China’s AI Safety Governance Framework 2.0 embeds stringent data security and privacy protection measures into AI governance across the entire data lifecycle, promoting data sanitization and anonymization techniques, such as the use of synthetic data, to minimize reliance on personal data for AI training.
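As one concrete illustration of lifecycle data minimization, the sketch below drops direct identifiers and pseudonymizes a stable key before records enter a training pipeline. The field names (user_id, email, and so on) are hypothetical, and salted hashing is pseudonymization rather than full anonymization in the GDPR sense.

```python
import hashlib

# Hypothetical field names; adapt to the actual record schema.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def sanitize_record(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the stable key with a salted hash."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in clean:
        digest = hashlib.sha256((salt + str(clean["user_id"])).encode()).hexdigest()
        clean["user_id"] = digest[:16]  # stable join key, raw identifier removed
    return clean

# Example: the email is dropped and the user_id is pseudonymized.
print(sanitize_record({"user_id": 42, "email": "a@b.c", "query": "..."}, salt="s3cret"))
```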
Supply Chain and Model Provenance Verification
The EU AI Act requires providers of general-purpose AI models to maintain detailed technical documentation on the model’s architecture, training processes, and data provenance, including the origin, collection methods, and preprocessing of training data. This promotes transparency, allowing regulators and downstream users to verify the authenticity and integrity of AI models, prevent tampering, and ensure traceability throughout the AI lifecycle.
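One lightweight way to make such documentation tamper-evident is to fingerprint a structured provenance record, as in the sketch below; the field names are illustrative assumptions, not the AI Act's documentation schema.

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class ModelProvenance:
    model_name: str
    architecture: str
    training_data_sources: list[str]
    preprocessing_steps: list[str]

    def fingerprint(self) -> str:
        """Content hash over the record, usable as a tamper-evidence check."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Any later change to the record (e.g. a new data source) changes the hash,
# so downstream users can detect silent edits to the documentation.
```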
The China AI Safety Governance Framework 2.0 sets out similar technical documentation requirements and explicitly mandates that AI-generated content be watermarked or labelled. Because AI watermarking technologies are still maturing, the framework also recommends quality-control methods for data annotation, such as cross-annotation and result audits, to improve labelling accuracy and minimize bias.
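A cross-annotation quality gate of the kind the framework recommends might look like the following sketch, which routes items with low annotator agreement to a result audit. The 0.8 agreement threshold is an arbitrary assumption.

```python
from collections import Counter

def audit_queue(labels_by_item: dict[str, list[str]], threshold: float = 0.8) -> list[str]:
    """Flag items whose majority label falls below the agreement threshold."""
    flagged = []
    for item_id, labels in labels_by_item.items():
        top_count = Counter(labels).most_common(1)[0][1]
        if top_count / len(labels) < threshold:
            flagged.append(item_id)
    return flagged

# Example: three annotators disagree on "doc-2", so it is flagged for audit.
print(audit_queue({"doc-1": ["A", "A", "A"], "doc-2": ["A", "B", "B"]}))
# -> ['doc-2']
```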
Additionally, the framework establishes guidelines for using open-source models, code, and libraries, requiring developers to conduct security audits of open-source code and development frameworks to identify and remediate vulnerabilities. This mirrors the Software Bill of Materials (SBOM) practice: transparency about the components used, documentation of their origins and licenses, continuous vulnerability monitoring, and communication of supply chain risks.
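An SBOM-style audit can be approximated with a simple inventory check, as sketched below. The component list, license fields, and vulnerability map are placeholders; a production pipeline would consume CycloneDX or SPDX documents and query a vulnerability feed such as OSV.

```python
# Placeholder inventory; a real pipeline would parse CycloneDX/SPDX documents.
COMPONENTS = [
    {"name": "examplelib", "version": "1.2.0", "license": "MIT"},
    {"name": "otherlib", "version": "0.9.1", "license": "Apache-2.0"},
]

# Placeholder vulnerability map; a real pipeline would query a feed such as OSV.
KNOWN_VULNS = {("examplelib", "1.2.0"): ["EXAMPLE-VULN-0001"]}

def audit_components(components: list[dict], vulns: dict) -> list[tuple]:
    """Flag components whose (name, version) pair has a known vulnerability."""
    findings = []
    for component in components:
        key = (component["name"], component["version"])
        if key in vulns:
            findings.append((key, vulns[key]))
    return findings

print(audit_components(COMPONENTS, KNOWN_VULNS))
# -> [(('examplelib', '1.2.0'), ['EXAMPLE-VULN-0001'])]
```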
“Human-in-Control” Mandate at System Design Level
The EU AI Act mandates that high-risk AI systems include effective human-machine interface tools, allowing humans to oversee the systems. This ensures that risks to health, safety, and fundamental rights are minimized by enabling human monitoring or intervention.
Similarly, China’s AI Safety Governance Framework 2.0 promotes proactive risk identification, continuous monitoring, emergency response mechanisms, and human review throughout the AI lifecycle. It goes further on “Human-in-Control”, however, requiring that systems be designed with safety control thresholds and safety-stop (or switch-to-manual) mechanisms for human intervention, in addition to technical guardrails, so that AI operation cannot become uncontrollable.
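A minimal sketch of such a safety-stop design, assuming the system exposes a numeric per-action risk score; the threshold values and the require_review and switch_to_manual hooks are hypothetical.

```python
SAFETY_THRESHOLD = 0.7  # above this, require human review (assumed value)
HARD_STOP = 0.95        # above this, halt autonomous operation (assumed value)

def guard(action, risk_score: float, require_review, switch_to_manual) -> str:
    """Route an action through human review or a hard stop based on its risk score."""
    if risk_score >= HARD_STOP:
        switch_to_manual(action)  # safety stop: hand control back to a human
        return "stopped"
    if risk_score >= SAFETY_THRESHOLD:
        return "approved" if require_review(action) else "rejected"
    return "auto-approved"
```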
The table below summarizes the key differences:

| Category | EU AI Act (with GDPR context) | China AI Safety Governance Framework 2.0 |
|---|---|---|
| Risk classification | Prohibited; High-risk; Limited-risk; Minimal-risk | Low; Moderate; Considerable; Major; Extremely Serious |
| Accountability & roles | Providers, Deployers, Importers, Distributors, Manufacturers, Authorized Representatives | Developers, AI Service Providers, System/Application Users, Government Authorities |
| Enforcement signals | Compliance timelines; conformity assessments; regulator cooperation | Filings for public-facing AI; CAC enforcement actions as needed |
| Privacy focus | GDPR-aligned; privacy embedded in risk/transparency controls | Lifecycle privacy: strong data protection, minimization, sanitization, and synthetic data |
| Cross-border data | GDPR mechanisms govern transfers in AI contexts | Data governance/localization considerations; strong lifecycle protections |
| Data provenance & supply chain | Model provenance (architecture, training data, preprocessing) | Similar, plus data watermarking/annotation; open-source security audits; SBOM-style visibility |
| Human-in-the-loop / safety | High-risk systems require human oversight and intervention | Continuous monitoring; safety thresholds; manual overrides |
| Practical implications for enterprises | Map roles to functions; DPIAs; lifecycle controls; plan for conformity | Lifecycle risk management; mandatory filings; regulatory liaison; watermarking and audits |
Ensuring Responsible and Ethical Advancement of AI
The AI governance frameworks of the European Union and China reflect distinct regulatory philosophies: the EU centres on technology’s impact on fundamental rights, while China applies a multidimensional, scenario-based assessment of risk. Both emphasize accountability across the AI lifecycle and the integration of privacy governance, which is essential for enterprises navigating these markets. By understanding and adopting these approaches, businesses can ensure compliance and contribute to the responsible and ethical advancement of AI, fostering sustainable innovation in a connected world.