For the past year, I have been in many professional circles where the prevailing sentiment on AI was to wait and see how the regulation settled before committing to a governance framework. Well, the wait is over. The EU AI Act is no longer a coming-soon teaser; it is the law of the land. If you think this is just another GDPR-style paperwork exercise, you are in for a surprise.
It’s Not About the Tech: It’s About the Use Case
One of the biggest misconceptions I see is leaders asking whether their AI is legal. That is the wrong question, because the Act cares less about your code than about your context. The risk-based hierarchy, running from minimal to limited to high-risk to outright prohibited uses, is the heart of this regulation.
Most of us will live in the high-risk zone. If your AI is making decisions about who gets hired, who gets a loan, or how critical infrastructure is managed, you are now under a legal microscope. You are looking at mandatory data governance, rigorous technical documentation and meaningful human oversight.
Where Most Organizations Will Trip Up
In my deep dive into the Act, three specific areas stand out as potential landmines for even the most mature teams:
The Data Quality Trap: Article 10 demands that training data be relevant, representative and, to the best extent possible, free of errors. Anyone who has ever worked with a real-world dataset knows that "error-free" is a massive mountain to climb.
The Supply Chain Blind Spot: Many professionals believe they are safe because they only use a third-party tool. However, if you substantially modify that tool or put your own name or trademark on it, you can be legally reclassified as a Provider. This means you inherit the full weight of a Provider's obligations and liability.
Human in the Loop vs. Human on Paper: The Act requires that the person overseeing the AI actually understands the output and can challenge it. You cannot simply have a junior staffer clicking an approve button all day.
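To make the data-quality trap concrete, here is a minimal sketch of the kind of automated audit a team might run as a first step toward Article 10 readiness. The field names, thresholds and checks are my own illustrative assumptions, not anything the Act prescribes; a real programme would go far beyond this.

```python
# Illustrative sketch: basic data-quality checks over a training dataset.
# Field names and the toy hiring data below are hypothetical examples,
# not regulatory requirements.

def audit_dataset(records, required_fields, label_field):
    """Return simple data-quality findings for a list of record dicts."""
    findings = {
        "total": len(records),
        "missing": 0,        # records with any required field empty or absent
        "duplicates": 0,     # exact duplicate records
        "label_counts": {},  # class distribution, a crude representativeness signal
    }
    seen = set()
    for rec in records:
        if any(rec.get(f) in (None, "") for f in required_fields):
            findings["missing"] += 1
        key = tuple(sorted(rec.items()))
        if key in seen:
            findings["duplicates"] += 1
        seen.add(key)
        label = rec.get(label_field)
        findings["label_counts"][label] = findings["label_counts"].get(label, 0) + 1
    return findings

# Toy hiring dataset with one incomplete and one duplicate record.
data = [
    {"years_exp": 5, "degree": "MSc", "hired": "yes"},
    {"years_exp": 2, "degree": "", "hired": "no"},      # missing degree
    {"years_exp": 5, "degree": "MSc", "hired": "yes"},  # exact duplicate
]
report = audit_dataset(data, required_fields=["years_exp", "degree"],
                       label_field="hired")
print(report)
```

Even a crude script like this surfaces the gaps an auditor will ask about: completeness, duplication and class balance. The point is to generate documented evidence of data governance, which is exactly what a conformity assessment will demand.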
The Silver Lining: Trust is the New Currency
The potential fines are significant, with penalties reaching up to 35 million euros or 7 percent of global annual turnover, whichever is higher. However, I view this Act as a gift to the industry. In a world where everyone is skeptical of black-box algorithms, being the company that can demonstrate a conformity assessment is a massive competitive advantage. Early adopters are not just avoiding fines. They are building a brand centered on trustworthy AI.
The Bottom Line
The EU AI Act is a roadmap rather than a roadblock. As ISACA professionals, we are the ones who have to translate this legal jargon into technical reality. My advice: do not wait for the first round of enforcement. Start your gap analysis today.
About the author: Ali Nouman, MSc Cybersecurity, CISA, CISM, CISSP, CDPSE, TRAP, PMP, ITIL, is an award-winning cybersecurity professional with 18 years of experience, specializing in the intersection of AI governance and risk management. His full white paper on this topic is published in the ISACA Engage Emerging Technology Library, contributing to the global discourse on emerging technology risks and governance.