



Cyberrisk models have become indispensable for enterprises striving to assess, quantify and manage cybersecurity threats. Yet, for these models to be effective, they must be trusted. If stakeholders lack confidence in a model’s accuracy and reliability, they will be hesitant to integrate its findings into decision-making processes. But trust in a cyberrisk model is not a given—it must be earned.
Building and evaluating trust in cyberrisk models requires a structured approach that goes beyond just validating mathematical calculations. Enterprises must also assess the credibility of the model, the transparency of its data sources and its real-world applicability. To address this challenge, I have developed a multilayered framework for evaluating trust in cyberrisk models, rooted in the classical rhetorical principles of logos, ethos and pathos. This framework enables organizations to systematically assess whether a model is fit for purpose and fosters confidence in its outputs.
The foundation of my trust framework is Aristotle’s three modes of persuasion: logos, ethos and pathos. Logos ensures logical soundness: the model must be built on well-documented methodologies and validated statistical techniques, and transparency in how the model operates is essential. Ethos accounts for the credibility of the model and its creators, shaped by their expertise, independent validations and adherence to established standards. Pathos represents emotional confidence and stakeholder buy-in, recognizing that trust also depends on peer adoption and perceived reliability. Applying these rhetorical principles provides a structured way to communicate a model’s reliability and to address concerns that potential users may have.
To operationalize these principles, the framework is broken down into three core tiers: attributes, artifacts and evidence. Attributes define the fundamental elements that contribute to trust, including model transparency, data transparency, validation and empiricism. Model transparency requires clear documentation and alignment with industry standards, while data transparency ensures traceability of data sources and governance processes. Validation encompasses both internal and external assessments to verify accuracy, and empiricism refers to evidence of adoption by industry leaders and positive real-world performance.
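The tiered structure described above can be sketched as a simple data model. This is only an illustrative sketch: the class names, fields and the idea of attaching artifacts directly to attributes are my assumptions about one way to represent the framework, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """A tangible piece of proof supporting a trust attribute."""
    name: str
    description: str

@dataclass
class Attribute:
    """A fundamental element of trust, backed by gathered artifacts."""
    name: str
    artifacts: list[Artifact] = field(default_factory=list)

# The four attributes named in the framework, each awaiting artifacts.
framework = [
    Attribute("model transparency"),
    Attribute("data transparency"),
    Attribute("validation"),
    Attribute("empiricism"),
]
```

Representing the tiers explicitly makes it easy to audit which attributes still lack supporting artifacts before any trust assessment is attempted.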
Artifacts serve as tangible proof supporting each attribute. For example, standards and documentation underpin model transparency, clear data lineage and governance policies demonstrate data transparency, independent audits and test results validate the model, and case studies and endorsements provide empirical evidence. Gathering and analyzing these artifacts allows organizations to measure trustworthiness in a structured manner. Enterprises may also choose to develop a scoring system to quantify trust, evaluating factors such as adherence to best practices, external validation and demonstrated effectiveness.
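A scoring system of the kind suggested above could be sketched as a weighted average over the four attributes. The 0–5 rating scale, the specific weights and the sample ratings below are illustrative assumptions, not values prescribed by the framework; each enterprise would calibrate its own.

```python
# Hypothetical weights reflecting the relative importance an enterprise
# might assign to each trust attribute. These are assumed values.
WEIGHTS = {
    "model transparency": 0.25,
    "data transparency": 0.25,
    "validation": 0.30,
    "empiricism": 0.20,
}

def trust_score(ratings: dict[str, float]) -> float:
    """Combine per-attribute ratings (assumed 0-5 scale) into one score."""
    if set(ratings) != set(WEIGHTS):
        raise ValueError("ratings must cover exactly the four attributes")
    return sum(WEIGHTS[attr] * rating for attr, rating in ratings.items())

# Sample ratings derived from gathered artifacts (illustrative only).
score = trust_score({
    "model transparency": 4.0,  # documented, standards-aligned
    "data transparency": 3.0,   # partial data lineage
    "validation": 5.0,          # independent audit passed
    "empiricism": 2.0,          # limited adoption evidence
})
```

A composite score like this is only as defensible as the artifacts behind each rating, which is why the framework places artifact gathering before scoring.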
Cyberrisk models influence high-stakes decisions, from cybersecurity investments to regulatory compliance and incident response. Ensuring that these models are built on a foundation of trust enhances their adoption and impact. By applying this multilayered framework, enterprises can systematically evaluate and validate their cyberrisk models, fostering confidence and improving their overall cybersecurity posture. Trust is not built overnight—it is earned through rigorous validation, transparency, and shared confidence among stakeholders. By leveraging logical soundness, credibility, and emotional investment, enterprises can ensure that their cyberrisk models are not just accepted, but embraced.
Editor’s note: For additional insights on this topic, see Jack Freund’s volume 2 ISACA Journal article, A Multilayered Framework for Assessing Trust in Cyberrisk Models.