



Companies are actively working on innovative use cases for AI systems by either building powerful applications or integrating AI capabilities with their existing processes to solve problems. Despite the rapid growth of this technology and its widespread use, there are not only questions about the accuracy and fairness of the system outputs but also serious concerns around data security, privacy and transparency.
Simultaneously, we are seeing a wave of regulations and standards that outline rules and requirements that developers and deployers of AI systems must comply with in order to build trust and promote the responsible use of AI. It is important for companies to consider and implement applicable safeguards that address the unique risks of AI and make these systems more trustworthy.
The role of the board
AI safety is a cross-functional business challenge – poor system performance or security issues could lead to lost revenue, penalties for non-compliance and reputational damage. The governance imperative starts with management commitment and tone at the top. The board needs to take an active role in setting the context and boundaries for AI usage. The board should proactively understand the risks pertaining to the use or development of AI systems and continually review the adequacy and status of the safeguards that reduce these risks to acceptable levels.
The role of the board includes (but is not limited to):
- Defining AI objectives and usage principles, and approving policies that provide the boundaries and basic rules for operations.
- Establishing a comprehensive risk assessment program and ensuring it is consistently followed.
- Reviewing metrics and results from monitoring systems and compliance programs to evaluate the continued effectiveness of the controls and safeguards implemented to mitigate identified risks.
Risk and impact assessment for AI
AI risk assessments should be done at two levels – organizational (macroscopic) and systemic (microscopic). The macro-level risks are pervasive, are assessed at a company level and involve the C-suite. This assessment evaluates organizational readiness for AI and considers factors such as the relevance and complexity of AI; the availability of resources (people, compute and tooling); the ability to implement strong data governance; and legal and compliance risks.
The impact of an AI system depends on the context in which it is implemented – its operating environment and how its outputs may be interpreted. Hence, the micro-level risks are assessed at a granular level, typically for each model or use case. The applicability of the various AI principles to the specific use case must be considered, and the impact of the system on individuals, groups of individuals, communities or organizations should be assessed.
To translate these risks into concepts that can be practically implemented, the critical risks at each phase of the AI system lifecycle can be grouped into three buckets: data management risks, model development lifecycle risks, and operational and monitoring risks.
Key controls for data management
An AI system is only as effective as the datasets it is trained on. Risks around the accuracy and appropriateness of the training data (as it relates to the objective and use cases of the AI application) and the likelihood of vulnerabilities, such as data poisoning and leakage, should be considered.
Key controls must be implemented to ensure the validity of data sources and the relevance and appropriateness of the data types and categories used for training. The most important control for data management is verifying the quality of training data. Quality means not only verifying the accuracy and integrity of the data through the ML pipelines but also ensuring the diversity of the datasets, so that they are inclusive and sufficiently represent the system’s user base.
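As a minimal illustration, the sketch below shows how some of these data quality and representativeness checks might be automated with pandas. The column name `user_region`, the file path and the thresholds are hypothetical placeholders; real checks would be tailored to the system’s data, user base and policies.

```python
import pandas as pd

# Illustrative thresholds; real values depend on the system's user base and policy.
MAX_NULL_RATE = 0.01
MIN_GROUP_SHARE = 0.05  # each group should supply at least 5% of records

def check_training_data(df: pd.DataFrame, group_col: str) -> list[str]:
    """Return a list of data-quality findings for a training dataset."""
    findings = []

    # Integrity: missing values and exact duplicates coming out of the pipeline.
    for col, rate in df.isna().mean().items():
        if rate > MAX_NULL_RATE:
            findings.append(f"Column '{col}' has {rate:.1%} missing values")
    dup_rate = df.duplicated().mean()
    if dup_rate > 0:
        findings.append(f"{dup_rate:.1%} of rows are exact duplicates")

    # Representativeness: flag groups that are drastically under-represented.
    for group, share in df[group_col].value_counts(normalize=True).items():
        if share < MIN_GROUP_SHARE:
            findings.append(f"Group '{group}' supplies only {share:.1%} of records")

    return findings

if __name__ == "__main__":
    df = pd.read_csv("training_data.csv")  # hypothetical extract from the ML pipeline
    for finding in check_training_data(df, group_col="user_region"):
        print("FINDING:", finding)
```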
Traditional cybersecurity controls like least privilege and encryption should also be implemented. Further, pertinent privacy-related risks and controls around appropriate disclosures, consent, data deletion/retention and the inclusion of (or opting out of) PII in the datasets should be accounted for.
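For instance, a simple scan for common PII patterns can supplement, though never replace, dedicated data-classification tooling. The regular expressions below are illustrative assumptions only and would miss many real-world PII formats.

```python
import re

# Illustrative patterns; production systems typically rely on dedicated
# PII-detection and data-classification services.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_for_pii(records: list[str]) -> dict[str, int]:
    """Count occurrences of common PII patterns in free-text training records."""
    counts = {name: 0 for name in PII_PATTERNS}
    for text in records:
        for name, pattern in PII_PATTERNS.items():
            counts[name] += len(pattern.findall(text))
    return counts

if __name__ == "__main__":
    sample = ["Contact jane.doe@example.com", "SSN 123-45-6789 on file"]
    print(scan_for_pii(sample))  # flag records for redaction or consent review
```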
A nuanced model development lifecycle for AI
The key concepts are similar to those in a traditional SDLC process but are more nuanced and complex for AI. Key risks include incorrect design that may cause the system to fall short of its intended objectives, insecure development (including model supply chain risks) and insufficient testing and validation prior to deployment. In addition to traditional SDLC controls like approval of the system design and CI/CD pipeline scans, AI systems require controls around model training and validation.
Depending on the requirements for the specific model or use case, the extent of model verification must be identified and documented. It should include AI-specific techniques like adversarial AI testing, prompt injection testing, fairness testing, model robustness evaluation and red teaming exercises. Specific metrics that system checks must meet before the model can proceed to deployment must be defined.
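The sketch below shows one possible shape for such a pre-deployment gate. The metric names and thresholds are assumptions chosen for illustration, not prescribed values; in practice they would come from the documented verification plan and upstream test jobs.

```python
# A minimal sketch of a pre-deployment gate, assuming metrics are already
# computed by upstream validation jobs (names and thresholds are illustrative).
REQUIRED_THRESHOLDS = {
    "accuracy": 0.90,             # overall predictive performance
    "robustness_score": 0.80,     # e.g., accuracy under adversarial perturbations
    "fairness_parity_gap": 0.10,  # max allowed gap in positive rates across groups
}

def deployment_gate(metrics: dict[str, float]) -> bool:
    """Return True only if every documented validation check passes."""
    failures = []
    if metrics.get("accuracy", 0.0) < REQUIRED_THRESHOLDS["accuracy"]:
        failures.append("accuracy below threshold")
    if metrics.get("robustness_score", 0.0) < REQUIRED_THRESHOLDS["robustness_score"]:
        failures.append("robustness below threshold")
    if metrics.get("fairness_parity_gap", 1.0) > REQUIRED_THRESHOLDS["fairness_parity_gap"]:
        failures.append("fairness parity gap too large")

    for failure in failures:
        print("BLOCKED:", failure)
    return not failures

if __name__ == "__main__":
    # Example results from adversarial, fairness and robustness test runs.
    results = {"accuracy": 0.93, "robustness_score": 0.84, "fairness_parity_gap": 0.06}
    print("Proceed to deployment:", deployment_gate(results))
```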
An AI governance culture balancing safety and innovation
It is important to facilitate the observability of an AI system by enabling the appropriate logs at each layer. The AI system must be continuously monitored across parameters related to the functioning of the algorithm so that errors, security issues and system issues can be identified and detected. Key metrics like accuracy, precision, robustness and model drift should be tracked, and the results used to fine-tune and correct the models. This constant monitoring is crucial to understanding how the model behaves for various inputs and to detecting performance drift, degradation in fairness and any underlying cybersecurity issues.
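As one example of drift tracking, the sketch below computes a Population Stability Index (PSI) between validation-time and production score distributions. The synthetic distributions, bin count and the 0.2 alert threshold are illustrative assumptions; real monitoring would use the system’s own baseline and alerting policy.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline (expected) and a production (observed) score distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    obs_counts, _ = np.histogram(observed, bins=edges)
    # Convert counts to proportions, avoiding division by zero in empty bins.
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    obs_pct = np.clip(obs_counts / obs_counts.sum(), 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)    # scores at validation time
    production = rng.normal(0.5, 1.2, 10_000)  # scores observed in production
    psi = population_stability_index(baseline, production)
    # A common rule of thumb: PSI > 0.2 suggests significant drift worth investigating.
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> stable")
```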
The most important piece is to integrate these governance activities with the routine processes of an organization and focus on periodic risk assessment and implementation of key controls that address the unique risks of AI. It is critical to create a governance culture that balances AI safety and innovation.
Editor’s note: Varun will be sharing additional insights on this topic during his session, “Building Trust in AI: Key Risks and Controls Considerations,” at ISACA Conference North America, to take place 21-23 May in Orlando, Florida, USA, and virtually.