It is 9:30 p.m. on Sunday—Mother’s Day. I am in my home office reformatting my laptop as a result of a mysterious Windows 10 EVENT_TRACING_FATAL_ERROR. As I sit at my desk playing Mahjong on my cell phone and cursing Bill Gates, I wait for Windows 10 to reload and check for updates. Thank goodness I keep all of my data on a separate hard disk. As I sigh with exasperation, my husband’s voice sounds from the other room: “Just restore it to the last point that worked.” Silence. “You do create restore points before you load updates, don’t you?” he asks, snickering. I growl under my breath and respond, “No,” in a tone that grudgingly admits I did not and never have.
Oh, did I mention that I am a home-based worker? If I have technology issues, I am 1,900 miles away from my office, so I can’t just hop in the car and get somebody else to fix my problems.
By now you might be wondering why, as an IS auditor, I do not practice what I preach.
I know that my problem, if not caused by my own ignorance, was at least exacerbated by not following the best practice of creating a restore point. If creating backups of data is a prerequisite for recovery,1 then backing up the corresponding code and system configuration should also be required for successful recovery. However, lest you think I am a complete Luddite, please know that I do back up my confidential data to a separate hard disk not connected to the Internet and use a personal cloud as a backup for non-confidential data. I also have a UPS, several extra modems and routers, and a backup laptop. In case my Internet goes down, I even have a nifty business resumption plan (e.g., go to Starbucks, enjoy a latte, and use their free Wi-Fi). Yet why, despite my education, certification, and years of experience in IS auditing, do I place my systems at risk by employing some best practices while blatantly ignoring others?
Cost was obviously not a factor, as creating a restore point is a built-in Windows OS function. Nor was it a lack of understanding of the ramifications of failing to create restore points. As far as I can tell, my only excuse was my perception that the risk of OS failure was low compared to other types of risk, such as loss of connectivity or data loss.
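In fairness to my husband’s advice, creating a restore point really is one built-in command away. The sketch below is my own illustration, not part of the column: it wraps Windows PowerShell’s Checkpoint-Computer cmdlet in a small Python helper, so a restore point can be created automatically before loading updates. The description string and script layout are assumptions for the example; running it for real requires Windows and an elevated (administrator) session.

```python
import platform
import subprocess


def restore_point_command(description: str) -> list:
    """Build the PowerShell command line that creates a Windows restore point.

    Checkpoint-Computer is the built-in cmdlet for this; the description
    text and restore-point type used here are illustrative choices.
    """
    script = (
        "Checkpoint-Computer -Description '{}' "
        "-RestorePointType 'MODIFY_SETTINGS'".format(description)
    )
    return ["powershell.exe", "-NoProfile", "-Command", script]


def create_restore_point(description: str = "Before Windows updates") -> None:
    """Create a restore point; Windows-only, and needs an admin shell."""
    if platform.system() != "Windows":
        raise RuntimeError("Restore points are a Windows-only feature")
    subprocess.run(restore_point_command(description), check=True)
```

Calling create_restore_point() just before applying updates would have given me exactly the rollback option my husband was snickering about.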
An individual’s willingness to adopt or reject an IT control depends not only on the real security risk, but also on the perceived risk.2 Perception plays a far more important role in decision making than we realize. This means that some people (and organizations) will accept the possibility that something might happen rather than use precious resources to implement controls to prevent it. This false optimism is simply human nature,3 and sometimes it is only after experiencing the pain of one’s actions (or lack thereof) that individuals and organizations change.
How can we, as CISAs, ensure our clients perceive the real risk? As IS auditors, it is important that we understand why our clients might be resistant to change and reluctant to employ controls. If we can relate to them, then perhaps we can more effectively communicate our recommendations. After all, isn’t auditing another method of education?
At the very least…I might start taking my own advice.
Editor’s note: The ISACA Now Blog section is celebrating Women in Technology Month throughout June by featuring female bloggers. If you are a female blogger and would like to contribute a blog, please contact us at [email protected].
1 ISACA, CISA Review Manual, USA, 2009
2 Huang, D. L.; P.-L. P. Rau; G. Salvendy; "Perception of Information Security," Behaviour & Information Technology, vol. 29, no. 3, May 2010, p. 221-232
3 University of Kansas, "People By Nature Are Universally Optimistic, Study Shows," Science Daily, 5 May 2009, www.sciencedaily.com/releases/2009/05/090524122539.htm