Human Risk Management: A Practical Approach

Author: Grant Hughes, CISA, CISM, CDPSE, CASP, CCSK, CCSP, CEH, CIH, CISSP, SSCP
Date Published: 11 September 2024

Bruce Schneier, an American cryptographer, computer security professional, privacy specialist, and writer, is well known for saying, “Amateurs hack systems; professionals hack people.”1 This statement encapsulates the reality of the digital landscape: Cybercriminals find it easier to manipulate humans than machines, so that is where they concentrate their efforts.

People interact with technology and data at various touch points, making them a crucial part of an enterprise’s primary defense strategy.2 Given cybercriminals’ increased efforts to manipulate humans through social engineering, people are more prone to make mistakes during these interactions. While there are several methods to mitigate or limit human errors, such as strong authentication, encryption, and data loss prevention, the most effective control is an informed and cybersavvy workforce.

Several reports suggest that human error is responsible for more than 90% of cyberbreaches.3 Thus, cybersecurity strategies should be based on the assumption that humans will continue to make mistakes, despite having successfully completed security awareness training.

Shifting From Security Aware to Security With Care

Awareness is facilitated through sharing knowledge and understanding; the primary objective of awareness is to educate people about specific topics. However, awareness does not guarantee behavior change. Consider the example of a speed limit: Most people know what the speed limit is, but they speed anyway. Similar examples include smoking cigarettes and eating unhealthy food, despite the known health risks. In each case, it is safe to assume that people are aware of the potential risk associated with their actions, but for a variety of reasons, they choose to act anyway.

Perhaps shifting the focus from security awareness to proactive cybersecurity care could yield more effective outcomes.

Human-Centric Security

The following story highlights the power and importance of a human-centric design:

In 2007, Doug Dietz, a designer at General Electric, had just completed a two-and-a-half-year project developing a magnetic resonance imaging (MRI) scanner.4 When he got the opportunity to see it in action at a local hospital, he jumped at the chance. Standing next to the new machine, Doug admired his work. He saw a gleaming white machine in a sanitized room, and the sound it made was like a beautiful melody to him.

A short time later, the technician tapped Doug on the shoulder and asked him to step out, as a patient was coming in for a scan. As he stood outside the room, looking in through a window, Doug saw a little girl walk in, and he noticed that she was crying. Her father knelt and told her, “My girl, we talked about this, you need to be brave.” The technician then called for the anesthesiologist, as the girl needed to be sedated. Doug asked the technician how often this happened, and the technician said, “Oh, it is quite common. As much as 80% of all children must be sedated as they are simply too scared.”

Doug was heartbroken. Suddenly, he saw the MRI scanner and the room that housed it from the little girl’s perspective. He saw the yellow and black lines on the floor, indicating where people were allowed to walk, and it looked like an accident scene. On the wall was a huge magnet with a danger sign on it, and the machine now sounded like a monster to him.

Upon reflection, Doug realized he needed to adopt a human-centric design approach. Instead of lines, stickers that resembled rocks were placed on the floor, and the children were told to walk on the “rocks” as they entered. Waterfall sounds were played in the room, and the MRI platform was turned into a canoe. The children were told that if they stayed very still, a fish might jump out of the water. This redesign transformed the entire experience: Fewer than 20% of children now must be sedated, and satisfaction scores rose to 90%.5

This story and its lesson can be applied to securing systems and data. System designers and security practitioners always aim to do things faster, cheaper, and better. Customers or users of the system, however, focus on the experience. If the user perceives security controls or processes to be onerous or unnecessary, they will try to circumvent those controls. For this reason, a human-centric approach to cybersecurity is imperative.

Social Power and How It Is Exploited

Social power—the degree of influence an individual or enterprise has over others—is frequently used by cybercriminals. They use social power to trigger certain responses, allowing them to manipulate their victims and make them do things they would not normally do. French and Raven’s Five Forms of Power can be applied in the context of human cybersecurity:6

  1. Reward power—Reward power occurs when one person can influence another by providing a positive outcome or reward. For example, an employer has reward power over an employee because the employer can increase the employee’s salary. Cybercriminals commonly use reward power in scams: Victims are convinced to do something that will be rewarded. For example, a victim may be asked to click on a link and fill in personal details to receive an Apple Watch.
  2. Coercive power—Coercive power is the opposite of reward power. It is based on the ability to cause negative outcomes for people, and it uses fear or punishment to manipulate them. Employers have coercive power over employees because they can reduce salaries or demote or fire employees. Cybercriminals use coercive power by impersonating chief executive officers (CEOs) or other senior managers to induce stress and create a sense of urgency, prompting victims to do things they normally would not.
  3. Legitimate power—Legitimate power influences people to perform actions at the direction of those in positions of power, such as management or law enforcement. Cybercriminals may exploit this by masquerading as the CEO and requesting sensitive information from employees. A cybercriminal could also pretend to be a law enforcement officer or regulator and request sensitive information or payments from unsuspecting victims.
  4. Referent power—Referent power relies on admiration or respect to exert influence. Cybercriminals exploit this power by impersonating a celebrity, a sports star, or anyone else the victim holds in high regard. For example, someone may impersonate a celebrity on social media and ask followers to donate money or click on a link to register for a ticket to a concert or a sports event.
  5. Expert power—Expert power represents informational influence, as experts are generally perceived to be especially knowledgeable about certain subjects. Cybercriminals use expert power by impersonating industry experts.

Cybercriminals often use a combination of these social powers to trick unsuspecting victims into making mistakes or taking actions they would not normally take.

Attack Techniques Exploiting Human Vulnerabilities

Social engineering is the process of manipulating people into doing whatever the cybercriminal instructs them to do. The objective may be to get people to share confidential information, such as passwords or other personal data, or to subvert security controls by clicking on a link or installing malicious software. Cybercriminals utilize various social engineering techniques:7

  • Phishing—Phishing is an attempt to trick individuals into revealing sensitive information by posing as a trusted entity. Historically, phishing was facilitated primarily via email, but it is now increasingly conducted through SMS texting (smishing) and over the telephone (vishing). Whatever the method, the objective remains the same: Cybercriminals impersonate trusted entities to manipulate victims into providing sensitive data or clicking on a malicious link. The two most noteworthy types of phishing are spear phishing and whaling. Spear phishing includes elements of expert power or referent power to convince victims that the requests are legitimate. The victim is often researched ahead of time and specifically targeted. Whaling targets high-value victims such as senior executives or users with escalated privileges.
  • Baiting—Baiting uses reward power to trick victims. It involves the promise of free items, such as vouchers or downloads, to entice victims.
  • Tailgating—Tailgating, also known as piggybacking, involves an unauthorized person following an authorized person into a restricted area.
  • Shoulder surfing—As the name suggests, shoulder surfing is a social engineering technique that involves looking over someone’s shoulder to gain access to restricted information.
  • Dumpster diving—Dumpster diving is the process of browsing through an individual’s or enterprise’s trash to retrieve information that could be used in a cyberattack.
  • Impersonation attack—With this targeted phishing attack, a cybercriminal pretends to be someone else in an attempt to steal sensitive data from the unsuspecting victim using social engineering techniques.
  • USB drop attack—In this type of attack, malicious USB drives are given enticing labels such as “Confidential” or “Salaries 2024.” The USB drives are then strategically placed in locations where individuals are likely to find them and pick them up. When plugged into an enterprise’s computer, these USB devices can automatically install malware or steal sensitive information.
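
Several of these techniques can be partially countered with simple technical heuristics alongside awareness. As an illustration only, the following sketch flags sender domains that closely resemble, but do not exactly match, a trusted domain, a common trick in impersonation and spear-phishing attacks. The trusted-domain list, the distance threshold, and all function names are illustrative assumptions, not part of any product described in this article.

```python
# Hedged sketch: flag lookalike sender domains used in impersonation attacks.
# Trusted domains and the threshold below are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (row by row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}

def looks_like_impersonation(sender: str) -> bool:
    """Flag domains close to, but not exactly, a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(domain, t) <= 2 for t in TRUSTED_DOMAINS)

assert not looks_like_impersonation("ceo@example.com")   # legitimate sender
assert looks_like_impersonation("ceo@examp1e.com")       # lookalike domain
```

A heuristic like this is no substitute for user awareness; it merely removes some of the easiest lures before they reach an inbox.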

Security Architecture Principles

A robust security architecture should ensure that the entire system remains secure even if one component fails. If a single human mistake, enabled by a lack of proper security awareness and training, can compromise the entire system, the underlying problem is weak security architecture. Security practitioners should develop systems based on the assumption that humans will make mistakes, and the following security architecture principles should be evaluated and adopted where appropriate:

  1. Defense in depth (DiD)—The security of the entire system should not be dependent on any one component. For example, if a user’s password is compromised, another security control such as multifactor authentication (MFA) should prevent unauthorized access to sensitive data.
  2. Design for information security—The confidentiality, integrity, and availability (CIA) of information must be considered when designing a system, not treated as an afterthought. It is often easier and more cost effective to build security into the system upfront.
  3. Design for least privilege—Users, devices, and any other consumers of resources and data must operate using the fewest privileges necessary to carry out their job function. For example, if a user only needs to view reports on a system, that individual should not have permission to make changes to the report as well.
  4. Separation of duties—Certain duties or access rights should be distributed to users under different reporting lines to ensure that no user has a conflict of interest. For example, the person responsible for selecting a vendor and placing an order should not also be responsible for approving the payment.
  5. Risk-based protection—Security controls should be applied to systems following a risk-based approach. For example, if a system contains no confidential or sensitive data, encryption should not be a mandatory control, as it comes at a cost.
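
To make the least-privilege and separation-of-duties principles concrete, here is a minimal sketch of how they might be enforced in an authorization layer. The role names, permission strings, and conflicting-duty pairs are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of least-privilege and separation-of-duties checks.
# All role and permission names are illustrative assumptions.

ROLE_PERMISSIONS = {
    "report_viewer": {"report:read"},
    "report_editor": {"report:read", "report:write"},
    "vendor_selector": {"vendor:select", "order:place"},
    "payment_approver": {"payment:approve"},
}

# Duty combinations no single user may hold (separation of duties).
CONFLICTING_DUTIES = [({"order:place"}, {"payment:approve"})]

def permissions_for(roles):
    """Union of permissions granted by a user's roles."""
    perms = set()
    for role in roles:
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

def is_allowed(roles, action):
    """Least privilege: permit only actions explicitly granted."""
    return action in permissions_for(roles)

def violates_separation(roles):
    """Flag role combinations that create a conflict of interest."""
    perms = permissions_for(roles)
    return any(a <= perms and b <= perms for a, b in CONFLICTING_DUTIES)

# A report viewer can read reports but cannot change them.
assert is_allowed(["report_viewer"], "report:read")
assert not is_allowed(["report_viewer"], "report:write")

# One user holding both ordering and payment approval is flagged.
assert violates_separation(["vendor_selector", "payment_approver"])
```

The design choice worth noting is deny-by-default: an action not explicitly granted is refused, which is the essence of least privilege.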

Human Risk Management Framework

To strengthen the human firewall and reduce risk, the following framework is proposed (figure 1):

  1. Promote good governance. Define a security awareness standard that includes the scope, objectives, and key performance indicators (KPIs) of the program.
  2. Garner support from senior leadership. Senior leaders must regularly stress the importance of cybersecurity. This can be accomplished via email, town hall meetings, or posters. Executive-level support is likely to facilitate cultural change within the enterprise.
  3. Segment staff based on risk. Board members and their assistants pose a different type of risk than those connected to the service desk. Segmenting users by risk allows messages (and their frequency) to be tailored to the user group. In addition, assessments can be tailored to simulate real and relevant risk.
  4. Establish a champion program. A cybersecurity champion program allows a group of users embedded in the enterprise to drive the security message. These users champion security from the front lines.
  5. Encourage users to report cyberincidents. The only thing worse than an employee making a mistake is an employee concealing a mistake. Creating an organizational culture that encourages people to report mistakes could be the difference between containing a cyberincident and not being able to.
  6. Tailor cybersecurity awareness by department. Human resources (HR) and payroll employees must be aware of impersonation attacks, such as changes in employees’ banking details for the direct deposit of paychecks, while the help desk must be aware of tactics to maliciously reset user passwords. For a cybersecurity awareness program to be effective, it must have a combination of general awareness content and tailored content specific to each department’s business processes. 
  7. Use different mediums. People learn differently and possess varying levels of awareness and education. It is important to understand the audience, their level of cyberawareness, and their preferred learning methods, such as in-person sessions, online materials, or self-study. The awareness program must include various methods of delivering content.
  8. Incorporate stories into the program. Stories (i.e., anecdotes, documented examples) are easy to remember, and when used correctly can be a powerful component of a security awareness and culture strategy. Facts and statistics pale in comparison to a powerful story with relatable key points.
  9. Regularly test effectiveness. This testing is often accomplished with phishing simulations, but results should be interpreted with caution. Users should not be humiliated if they are caught falling for a phishing simulation. There should be a documented policy for dealing with repeat offenders (users who consistently fail phishing simulations). The policy must be fair, risk-based, communicated to all employees up front, and consistently enforced.
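
Steps 3 and 9 above lend themselves to simple measurement. The following sketch computes phishing-simulation failure rates per risk segment and identifies repeat offenders against a documented threshold. The record format, segment names, and threshold value are illustrative assumptions.

```python
# Hedged sketch: per-segment phishing-simulation KPIs and repeat-offender
# identification. Record layout and threshold are illustrative assumptions.

from collections import defaultdict

# Each record: (user, risk segment, clicked simulated phishing link?)
results = [
    ("alice", "executives", True),
    ("bob", "service_desk", False),
    ("alice", "executives", True),
    ("carol", "executives", False),
    ("bob", "service_desk", True),
    ("alice", "executives", True),
]

REPEAT_OFFENDER_THRESHOLD = 3  # failures before the documented policy applies

def segment_failure_rates(records):
    """Failure rate per risk segment, for tailored follow-up training."""
    totals, fails = defaultdict(int), defaultdict(int)
    for _, segment, clicked in records:
        totals[segment] += 1
        if clicked:
            fails[segment] += 1
    return {s: fails[s] / totals[s] for s in totals}

def repeat_offenders(records, threshold=REPEAT_OFFENDER_THRESHOLD):
    """Users who failed at least `threshold` simulations."""
    fails = defaultdict(int)
    for user, _, clicked in records:
        if clicked:
            fails[user] += 1
    return sorted(u for u, n in fails.items() if n >= threshold)

print(segment_failure_rates(results))  # per-segment KPI for reporting
print(repeat_offenders(results))       # → ['alice']
```

Tracking rates per segment rather than per individual keeps the program fair and risk-based, consistent with the policy requirements described in step 9.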

People: The Most Targeted Link in Cybersecurity

As the adoption of technical controls such as MFA makes cybercriminals’ efforts more challenging, social engineering may seem to offer an easier point of entry. For this reason, individuals are constantly being targeted. Compounding the problem, a 2023 report by KnowBe4 found that one out of three employees is likely to click on a suspicious link or email or to comply with a fraudulent request.8

Stress, multitasking, and distractions are some of the main reasons users fall for social engineering scams, in addition to a lack of awareness. A user might be on a Zoom call while responding to emails at the same time. Split focus increases the chances of human error. Therefore, the link between employee wellness and cybersecurity awareness must be explored; mindful, relaxed, and calm employees are more likely to spot a social engineering attack.9

When adequately enabled, end users can be an extension of the cybersecurity team. Security teams must therefore equip users with the tools to detect and report suspicious behavior and create a culture in which acknowledging mistakes is not met with judgment, humiliation, or unjustified punishment. 

Conclusion

Human beings have cognitive limitations. Knowledge is not inborn; humans must be taught. For example, if no one informed the public that smoking is unhealthy, no one would know. But once certain information becomes widespread, it seems obvious and more like common sense. Herein lies the challenge: What is common sense to one person might not be common sense to the next person. Consider someone who has worked at sea all his life. The things he considers common sense would probably not be common sense for people with little experience at sea. The same is true of cybersecurity.

Human error may be responsible for 90% of data breaches, but this does not mean that end users are the weakest link. Rather, it highlights the fact that human beings are an important and often targeted link in the security chain.

Firewalls are effective only when they are configured correctly, running the latest firmware and, depending on the type of firewall, the latest signature files. In many ways, a human firewall is the same; it needs to be aware of the latest scams and social engineering techniques and know how to report them. Only when people are informed and empowered can they be an effective extension of the cybersecurity team.

Endnotes

1 Wittkop, J.; “The People Problem,” Building a Comprehensive IT Security Program, Apress, USA, 2016, https://doi.org/10.1007/978-1-4842-2053-5_6
2 Zongo, P.; The Five Anchors of Cyber Resilience: Why Some Enterprises Are Hacked Into Bankruptcy While Others Easily Bounce Back, Broadcast Books, Australia, 2018
3 World Economic Forum, The Global Risks Report 2022, 2022, https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2022.pdf
4 IDEO U, “The Journey From Design Thinking to Creative Confidence,” https://www.ideou.com/blogs/inspiration/from-design-thinking-to-creative-confidence
5 IDEO U, “Journey From Design Thinking to Creative Confidence”
6 Mind Tools, “French and Raven’s Five Forms of Power: Understanding Where Power Comes From in the Workplace,” https://www.mindtools.com/abwzix3/french-and-ravens-five-forms-of-power
7 Lenaerts-Bergmans, B.; “10 Types of Social Engineering Attacks and How to Prevent Them,” CrowdStrike, November 2023, https://www.crowdstrike.com/cybersecurity-101/types-of-social-engineering-attacks/
8 KnowBe4, Survey Report: 2023 African Cybersecurity and Awareness Report, 2023, https://info.knowbe4.com/research-2023-african-cybersecurity-awareness-report
9 Collard, A.; “Mindfulness in Cybersecurity Culture,” annacollard.com, 22 February 2023, https://www.annacollard.com/post/mindfulness-in-cybersecurity-culture

GRANT HUGHES | CISA, CISM, CDPSE, CASP, CCSK, CCSP, CEH, CIH, CISSP, SSCP

Is head of cybersecurity at Engen Petroleum Ltd. He is a strategic thinker, thought leader, and public speaker with a background in security strategy, architecture, cybersecurity risk, and security operations. Hughes has delivered multiple keynotes and has more than 14 years of experience in IT, eight of which have been spent in information and cybersecurity. He is a trusted advisor certified in many disciplines within the information security domain. He can be contacted on LinkedIn.