Your neighborhood zoo decides to conduct a risk assessment of its operations based on the threats posed to human life and to animal safety. The following is a snapshot of the assessment:
- The inherent risk level of a tiger is high, and the control value of a cage is moderate, so the final risk value of a caged tiger is moderate-high.
- Using the same method, the risk value for a caged hyena is moderate. However, the hyena’s cage is in a high traffic area because it is near the panda, which is the zoo’s most popular attraction, so the risk value of the hyena is also raised to moderate-high.
- The panda has a low risk value, but there are fears of people trying to enter its enclosure and the panda escaping. Since it also attracts the most traffic, the risk value of the caged panda is also raised to moderate-high.
The net result is that, whether it is a caged tiger, hyena or panda, all are rated moderate-high in the zoo’s risk assessment. Given that outcome, zoo management decides to invest in procuring more tranquilizer dart guns rather than in securing individual animals. In an unfortunate outcome of this decision, the tiger, whose cage did not have a moat around it, escapes from the zoo and creates chaos in the neighborhood. Because the escape happened outside the zoo’s normal operating hours, when most of the guards were not around, the selected control failed to mitigate the threat.
If the zoo scenario seems familiar, it is because such risk registers, based on simple addition/multiplication operations to assess risk, are deeply embedded in the information security ecosystem. The outcome of such a risk assessment reflects the intuition of the assessor rather than any actual measurement of risk. Security professionals are hesitant to use quantitative methods because of the following common misconceptions:
- True impact of a data breach cannot be measured
- Lack of data to assign probabilities
- Cyber security is too complex to model quantitatively
- Quantitative methods do not apply when humans are involved
We have to ask ourselves exactly how the existing risk matrices and risk scores alleviate these issues. Are they really helping us to make decisions?
The answer is that quantitative, probabilistic methods must be used precisely because of the lack of perfect information, not in spite of it. If perfect information were available, probabilistic models would not be required at all.
A very common question security leaders are asked is, “What is the probability of someone hacking into our network?” An immediate response is that it is neither 0% (impossible) nor 100% (certain); it lies somewhere between .0001 and .9999. Can we do appreciably better than that?
The model proposed in my recent ISACA Journal article can help determine the likelihood of an attack. Advantages of using this model include:
- Not just a probability of loss, but an actual financial loss amount can be calculated
- The model can be used to make a decision rather than justify a decision. By expanding the model, a chief information security officer (CISO) can decide (and explain to management) whether it is more important to spend on the log correlation tool or system hardening controls, depending on which one will reduce the loss value more.
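To make the second point concrete, here is a minimal sketch of the kind of comparison a CISO could run. All figures below are hypothetical assumptions for illustration (a 20% baseline breach probability, a US $2 million impact, and invented control costs and effects); they are not taken from the article’s model.

```python
# Illustrative sketch only: the probabilities, loss figures and control
# effects below are hypothetical assumptions, not the article's data.

def expected_loss(p_breach: float, loss_amount: float) -> float:
    """Expected annual loss = probability of a breach * financial impact."""
    return p_breach * loss_amount

# Baseline: assumed 20% annual breach probability, $2M impact.
baseline = expected_loss(0.20, 2_000_000)

# Hypothetical effect of each control on the breach probability.
with_log_correlation = expected_loss(0.12, 2_000_000)  # tool cost: $150k
with_hardening = expected_loss(0.05, 2_000_000)        # program cost: $250k

# Net benefit = reduction in expected loss minus the control's cost.
benefit_log = (baseline - with_log_correlation) - 150_000
benefit_hardening = (baseline - with_hardening) - 250_000

print(f"Log correlation net benefit: ${benefit_log:,.0f}")
print(f"Hardening net benefit:       ${benefit_hardening:,.0f}")
```

Under these assumed numbers, hardening wins despite its higher price tag, because it removes more expected loss per dollar spent; with different inputs the comparison could flip, which is exactly why the decision should rest on the numbers rather than on a color-coded matrix.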
The Bayesian network (BN)-based model used in my article has multiple advantages that are particularly well suited to the problems security professionals face. Because a BN exploits expert judgment and prior knowledge, it can draw inferences from very little data. These inferences can reduce the cyber security expert’s uncertainty and have a big impact on risk treatment decisions.
Read Venkatasubramanian Ramakrishnan’s recent ISACA Journal article:
“Cyberrisk Assessment Using Bayesian Networks,” ISACA Journal, volume 6, 2016.