With artificial intelligence (AI) regulatory obligations taking effect, organizations face immediate compliance deadlines. AI is now a production capability, not an experiment, and it attracts regulatory attention accordingly. The EU AI Act is phasing in its obligations in waves, with transparency and governance duties for general-purpose AI (GPAI) creating urgent implementation requirements. International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 42001:2023, the first certifiable AI management system standard, offers a familiar plan-do-check-act (PDCA) structure that can operationalize those legal requirements. Together, the EU AI Act defines what must be achieved and ISO/IEC 42001 describes how to run, evidence, and continuously improve an AI governance program.1 Organizations that pair the two are well positioned to navigate AI compliance and governance effectively.
Recent Industry Developments
Major technology organizations are actively responding to the EU AI Act’s Code of Practice requirements, with early compliance challenges emerging around documentation standards and governance attestation.2 The European Commission continues to provide regulatory guidance updates, while industry associations develop implementation guidance for their members.
Two developments shape near-term priorities. First, the EU AI Act’s initial obligations already apply: The earliest wave (including bans on certain unacceptable-risk practices) took effect in February 2025. Second, GPAI rules apply from 2 August 2025, with the EU Commission-endorsed Code of Practice offered as a voluntary, near-term route to meet transparency and related duties while harmonized standards mature. Providers releasing new GPAI models after that date must comply immediately, while models already on the market receive a longer runway. For deployers integrating GPAI, transparency and documentation expectations also rise.3
ISO/IEC 42001 meets this moment because it aligns AI governance with other management systems (e.g., ISO/IEC 27001), making it easier to fold new AI controls, metrics, and reviews into existing audit cadences and evidence repositories.4
As displayed in figure 1, the EU AI Act is the rulebook and ISO/IEC 42001 is the operating system that makes compliance repeatable and auditable.
Figure 1—EU AI Act and ISO/IEC 42001 Breakdown
| Guidance | Objective |
|---|---|
| EU AI Act | Establishes obligations by role and risk. The act assigns duties to providers, deployers, importers, and distributors. Obligations vary by risk class (unacceptable, high, limited, minimal) and by role. Many high-risk systems, presented in Annex III of the EU AI Act, can adhere to an internal-control conformity assessment, which places a premium on disciplined documentation, testing, and post-market monitoring.5 |
| ISO/IEC 42001 | Functions as an AI management system (AIMS). The standard sets requirements to establish, implement, maintain, and improve an AIMS. It emphasizes policy and scope, leadership and roles, risk methodology, operations, performance evaluation, and continual improvement—using the same Annex SL backbone that organizations already employ for security or quality assurance.6 |
Mapping Duties to Operations
Organizations can translate the EU AI Act’s requirements into tractable work by anchoring them to key elements of an ISO/IEC 42001–based AIMS. There are several elements that organizations should pay attention to:
- Risk management for high-risk systems—Requires life cycle controls proportionate to system purpose and hazards. Organizations can address this through a risk methodology that defines acceptance criteria, assigns roles, and establishes review processes with corrective action. Documentation includes risk register entries linked to design artifacts and management reviews.7
- Data governance and quality—Focuses on training dataset suitability, integrity, and representativeness with proper documentation. Organizations should implement data life cycle policies with supplier controls, sampling plans, and lineage tracking. Supporting records include data source inventories, quality results, lineage diagrams, and usage approvals.8
- Technical documentation and logging—Requires records sufficient for regulatory assessment and appropriate logging systems. Organizations should use controlled documentation procedures with versioned model cards and risk-based logging retention. Organizations should also maintain design history files, release notes, log-retention proofs, and trace matrices.9
- Transparency and human oversight—Demands clear usage instructions and human-oversight mechanisms to prevent risk. Organizations must establish oversight roles, operator training programs, and tested runbooks before deployment. Documentation includes user guidance, oversight playbooks, training logs, and override scenario test results.10
- Robustness, accuracy, and cybersecurity—Emphasizes performance and resilience proportionate to risk with vulnerability handling. Organizations must implement secure development standards, adversarial testing, red team assessments, vulnerability service level agreements (SLAs), and incident response procedures. Documentation includes pen test reports, remediation timelines, incident records, and corrective action closures.11
- Post-market monitoring and incident reporting—Requires surveillance of deployed systems with notification provided for serious incidents. Organizations should develop monitoring plans with thresholds, escalation roles, and management review processes. Maintaining monitoring dashboards, incident submissions, and corrective-action logs is also part of this process.12
- GPAI governance and the Code of Practice—Creates compliance timelines with transparency and safety duties for GPAI. The Code of Practice offers a voluntary path for demonstrating compliance with Articles 50–55 of the EU AI Act while standards finalize. Organizations should implement policies for GPAI provider/deployer roles, disclosures, and security documentation with management attestation. Documentation includes GPAI disclosure packages covering purpose, limitations, dataset statements, security notes, attestations, and review minutes.13
- Conformity assessment for high-risk AI systems—Involves Conformité Européenne (CE) marking through established conformity procedures for products sold within the European Economic Area (EEA). Depending on the applicable conformity assessment category (i.e., the specific assessment route designated for the system), organizations may be permitted to perform internal conformity evaluations without the involvement of a notified body. To support this process, organizations should maintain a unified conformity file that clearly maps each AI system’s requirements to corresponding policies, tests, and ongoing monitoring outputs. Supporting documentation includes requirement-to-test matrices, conformity declarations, and change logs.14
Once an organization maps these duties to daily operations, the program becomes a repeatable routine rather than a document scramble. Each high-risk system maintains a one-page register of obligations, an evidence index, and a current conformity file. This turns audits into retrieval exercises. A practical cadence for organizations might involve quarterly risk reviews at 100% coverage, a record of full dataset lineage, serious incident reports drafted within 72 hours of detection, and log retention set for 180 to 365 days according to risk tier. The result is steadier releases, fewer late changes, and a portfolio that stays ready for inspection.
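The cadence described above can be made machine-checkable. The following is a minimal sketch, not a prescribed implementation: the field names, tier labels, and per-tier retention values are hypothetical, chosen only to illustrate the 100% review coverage, 72-hour incident draft, and 180-to-365-day retention targets mentioned in the text.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical log-retention minimums (days) by risk tier, within the
# 180-365 day range described above. Real values are a policy decision.
RETENTION_DAYS = {"high": 365, "limited": 270, "minimal": 180}

INCIDENT_DRAFT_DEADLINE_HOURS = 72  # serious-incident draft window from the text

@dataclass
class SystemRegister:
    """One-page register entry for a single AI system (illustrative fields)."""
    system_id: str
    risk_tier: str                # "high" | "limited" | "minimal"
    quarterly_reviews_done: int
    quarterly_reviews_due: int
    log_retention_days: int
    hours_to_incident_draft: Optional[float]  # None if no incident occurred

def audit_findings(reg: SystemRegister) -> list:
    """Return cadence violations for one register entry."""
    findings = []
    if reg.quarterly_reviews_done < reg.quarterly_reviews_due:
        findings.append("risk-review coverage below 100%")
    if reg.log_retention_days < RETENTION_DAYS[reg.risk_tier]:
        findings.append("log retention below tier minimum")
    if (reg.hours_to_incident_draft is not None
            and reg.hours_to_incident_draft > INCIDENT_DRAFT_DEADLINE_HOURS):
        findings.append("serious-incident draft exceeded 72 hours")
    return findings
```

Running such a check per release keeps the register current, so an audit request becomes a retrieval exercise rather than a reconstruction effort.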
What Success Looks Like (for Boards, Auditors, and Engineers)
Effective implementation of an ISO/IEC 42001-based AI management system demonstrates 3 key characteristics across organizational levels. Organizations maintain clarity through current AI inventories with role mapping, one-page registers of obligations with owners and dates, and explicit lists of GPAI exposures and high-risk candidates.15 They ensure traceability by linking each obligation to specific processes and artifacts in the AIMS evidence index16 (e.g., "Article 43 requirement X -> Test Y -> Evidence Z" in the conformity file).17 Enterprises also build resilience through defined monitoring thresholds that trigger corrective action, scheduled management reviews, and internal audits that verify operation and closure. ISO/IEC 42001's PDCA structure sustains this cadence across reporting cycles.18
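The "requirement X -> Test Y -> Evidence Z" pattern above is, in essence, a lookup table. The sketch below assumes a simple dictionary-based evidence index; the obligation labels, test identifiers, and file paths are hypothetical examples, not artifacts from any real conformity file.

```python
# Illustrative AIMS evidence index: each obligation maps to the test that
# demonstrates it and to the evidence artifact's location. All identifiers
# and paths here are invented placeholders.
EVIDENCE_INDEX = {
    "Article 43: internal-control conformity assessment": {
        "test": "requirement-to-test matrix row T-017",
        "evidence": "conformity_file/declarations/sys-1-declaration.pdf",
    },
    "Article 43: technical documentation review": {
        "test": "documentation completeness check T-018",
        "evidence": "conformity_file/design_history/sys-1-dhf.pdf",
    },
}

def unlinked_obligations(index: dict) -> list:
    """List obligations missing either a test or an evidence pointer."""
    return [obligation for obligation, links in index.items()
            if not links.get("test") or not links.get("evidence")]
```

An internal audit can then verify traceability by asserting that `unlinked_obligations` returns an empty list for every system in scope.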
Challenges to Implementation and Practical Mitigations
Challenges are inevitable when organizations implement an ISO/IEC 42001–based AIMS and prepare for EU AI Act conformity. However, there are practical strategies organizations can use to mitigate these obstacles and establish a robust, resilient AIMS:
- Abolish paper-only governance—Policies without operating reviews, metrics, and evidence often fail during audits. A robust AIMS should include performance evaluation and management review to maintain evidence and corrective-action discipline.19
- Clarify role ambiguity—Many enterprises act as both provider and deployer across different systems, and duties often differ. A maintained role map within the AI inventory helps to ensure obligations are addressed per use case.20
- Stop underestimating GPAI timelines—GPAI transparency and governance begin earlier than many high-risk duties; the Code of Practice offers near-term structure and documentation templates.21
- Contain evidence sprawl—Artifacts scattered across drives and wikis can hinder conformity and investigations. A single conformity-file structure tied to the AIMS index supports regulatory inquiries and CE-marking steps.22
- Engage in scoped logging—Logs without retention, privacy, and access controls introduce risk. Logging scope and retention should align to both the EU AI Act expectations and AIMS controls.23
Conclusion
Regulators and industry continue to shape the early phases of GPAI governance. The EU Commission’s publication of the GPAI Code of Practice (and subsequent endorsements) signals a preferred path for transparency and security documentation. With compliance deadlines approaching rapidly, organizations must act now to establish governance frameworks. Organizations should pair the EU AI Act, which sets the legal obligations and timelines, with ISO/IEC 42001, which supplies the operating framework and evidence loop required to meet them. Organizations that link the 2 can demonstrate a credible governance posture quickly: a clear inventory and role map, GPAI disclosures and oversight, a pilot conformity file for high-risk candidates, and an AIMS that measures, reviews, and improves. The pairing of the EU AI Act and ISO/IEC 42001 has the potential to turn compliance from a scramble into a stable, auditable system—one that boards and auditors can understand and that engineering teams can operate.
Endnotes
1 Ammann, T.; Achnitz, F.; et al.; “Latest Wave of Obligations Under the EU AI Act Take Effect,” DLA Piper, 7 August 2025; European Commission, “EU Rules on General-Purpose AI Models Start to Apply, Bringing More Transparency, Safety and Accountability,” European Union, 1 August 2025; International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee on Information Technology (ISO/IEC JTC 1), ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system, Edition 1, 2023
2 An Introduction to the Code of Practice for General-Purpose AI
3 Ammann, Achnitz, “Latest Wave of Obligations”; European Commission, “The General-Purpose AI Code”
4 ISO/IEC, ISO/IEC 42001:2023
5 EU Artificial Intelligence Act, “Annex III: High-Risk AI Systems Referred to in Article 6(2)”
6 ISO, “Management System Standards”
7 Ammann, Achnitz, “Latest Wave of Obligations”
8 Ammann, Achnitz, “Latest Wave of Obligations”
9 Ammann, Achnitz, “Latest Wave of Obligations”
10 Ammann, Achnitz, “Latest Wave of Obligations”
11 Ammann, Achnitz, “Latest Wave of Obligations”
12 Ammann, Achnitz, “Latest Wave of Obligations”
13 European Commission, “EU Rules on General-Purpose AI”; EU Artificial Intelligence Act, “Article 43: Conformity Assessment,” European Union; EU Artificial Intelligence Act, “Implementation Timeline”; Orru, M.; “EU Lays Out AI Code of Practice to Guide Companies on Compliance,” The Wall Street Journal, 10 July 2025
14 EU Artificial Intelligence Act, “Article 43: Conformity Assessment”; EU Artificial Intelligence Act, “Implementation Timeline”
15 Ammann, Achnitz, “Latest Wave of Obligations”
16 The AIMS evidence index is a controlled register in the AIMS that links each requirement to its evidence location
17 EU Artificial Intelligence Act, “Article 43: Conformity Assessment”
18 EU Artificial Intelligence Act, “Article 43: Conformity Assessment”; EU Artificial Intelligence Act, “Implementation Timeline”
19 ISO/IEC, ISO/IEC 42001:2023
20 Ammann, Achnitz, “Latest Wave of Obligations”
21 European Commission, “EU Rules on General-Purpose AI”
22 EU Artificial Intelligence Act, “Article 43: Conformity Assessment”; EU Artificial Intelligence Act, “Implementation Timeline”
23 Ammann, Achnitz, “Latest Wave of Obligations”
Gnanendra Reddy, ISO/IEC 27001 Lead Auditor
Is a cybersecurity and DevSecOps architect with deep experience designing and operating governance, risk, and compliance programs on ServiceNow IRM. He has led mission-critical initiatives in DevOps, automation, policy, risk, audit, vendor risk, data privacy, and business continuity, and implements automated controls across Kubernetes and enterprise platforms. In DevOps, he builds secure continuous integration/continuous delivery (CI/CD) pipelines, infrastructure as code with Terraform, and software supply chain safeguards such as software bill of materials (SBOM) generation, static application security testing (SAST), dynamic application security testing (DAST), and container policy enforcement. He is a Kubestronaut in the Cloud Native Computing Foundation (CNCF) community, IEEE Senior Member, Sigma Xi full member, and ISACA® member.