ISACA Journal
Volume 3, 2018


Protect, Detect and Correct Methodology to Mitigate Incidents: Insider Threats 

Max Alexander, CISM, CRISC, CISSP, DoD Cyber Crime Investigator 

In terms of cybersecurity, many organizations tend to worry about external threats such as hacking or distributed denial-of-service (DDoS) attacks. These, after all, are what news organizations broadcast most and, through availability bias, what many executives and administrators tend to focus on when implementing information security controls for their organizations. Unfortunately, external threats are not the only hazards organizations face, and the internal threat can be both more common and more catastrophic.

This article explores the insider threat by defining the problem and examining what makes an insider threat. The article then presents strategies to mitigate the threats posed by insiders. Finally, it looks at mitigation strategies in terms of people, processes and technologies, and how each of these pillars can be used to protect, detect and correct insider threat activities and ultimately safeguard an organization’s information.

The Insider Threat

While there is much focus on external threats such as hackers, according to a recent survey by the SANS Institute, insider threats cause the most damage to an organization’s security.1 As per the 2014 Breach Level Index report from Gemalto,2 insider threats are the second leading cause of data loss in general and, according to the Verizon “2018 Protected Health Information Data Breach Report” (PHIDBR) (figure 1), the number-one cause of protected health information (PHI) breaches.3 In the most glaring example, former US National Security Agency (NSA) contractor Edward Snowden caused exceptionally grave damage to US national security by releasing terabytes of data to journalists and, in turn, to the United States’ adversaries.4 More recently, another NSA contractor, Harold T. Martin III, was also accused of stealing terabytes of data, some of which related to the NSA’s most sensitive operational element, the Tailored Access Operations (TAO) group, and providing it to the hacking group the Shadow Brokers.5 Although the full extent of the damage caused by these individuals may never be known, a recent US House Intelligence Committee report labeled Snowden’s actions as causing “tremendous damage,” endangering the lives of US troops and providing information that terrorists and enemy adversaries could use to create defensive measures against the United States.6

Figure 1

Obviously, the actions of these two individuals were detrimental to national security and exposed vulnerabilities in the NSA’s information security. Fortunately, they were caught, but, unfortunately, not before they inflicted irrecoverable damage. Ideally, the US federal government would want to identify and stop insider threat activity before it causes damage.

The first step in mitigating these insider threat activities is defining who is or can be an insider threat. While there are numerous definitions of what constitutes an insider threat, many of them tend to vary regarding the insider’s intentions. Previously, most definitions focused on an insider acting with malice; however, these definitions leave out a significant portion of insider threat activity. Therefore, the recently updated definition by Daniel Costa from Carnegie Mellon University’s (Pittsburgh, Pennsylvania, USA) Software Engineering Institute is the most comprehensive. Costa’s definition states an insider threat is:

The potential for an individual who has or had authorized access to an organization’s assets to use their access, either maliciously or unintentionally, to act in a way that could negatively affect the organization.7

This definition paints with a broad brush and encompasses both intentional acts and unintentional acts. With this broader definition, administrators and executives can implement more effective countermeasures to mitigate these threats. Knowing that insider threats can be anyone with access to information, the next step for administrators is to determine what motivates them to act.


The motivations of insider threats are as varied as the threats themselves. Unintentional insiders may not act with malice; they are more often motivated by a desire to be helpful or efficient. Conversely, malicious insiders are likely motivated by self-interest or revenge.

Unintentional Insider Threats
Insiders can have myriad motivations that may cause them to act in a way that harms their organization’s information security posture. Unintentional insider threats, though not motivated by malice and in many cases occurring out of ignorance, are more likely to be driven by the desire to help.8 This innate desire to help can be leveraged by social engineers to infiltrate an information system with the purpose of exfiltrating data.

Social engineering, or the manipulation of human beings to achieve a goal or objective, is a common tactic for hackers, scam artists and would-be infiltrators. For instance, according to a report by PhishLabs,9 the number-one cyberattack vector is phishing emails. With phishing emails, users receive an email they believe to be legitimate and then interact with the email, infecting their system. In many cases, the emails may appear to come from friends, coworkers or superiors, and the employee is manipulated into opening the email in a desire to be helpful or to provide a response, often filled with proprietary information, to an unauthorized person.
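One simple, illustrative defense against the impersonation described above is a header sanity check. The sketch below is a hypothetical heuristic, not a complete phishing filter: it flags messages whose “From” header shows a friendly display name while the actual address comes from outside an assumed internal domain (both the domain and the example addresses are invented).

```python
# Hypothetical heuristic: a display name paired with an external sending
# domain is a common marker of an impersonation attempt.
from email.utils import parseaddr

TRUSTED_DOMAIN = "example.com"  # assumed internal domain for this sketch

def looks_spoofed(from_header: str) -> bool:
    """Flag a From header whose display name masks an outside address."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    return bool(name) and domain != TRUSTED_DOMAIN

# A look-alike domain ("examp1e.co") behind a familiar name gets flagged;
# a genuine internal address does not.
print(looks_spoofed("Jane Smith <jane.smith@examp1e.co>"))   # True
print(looks_spoofed("Jane Smith <jane.smith@example.com>"))  # False
```

Real mail gateways combine many such signals (SPF, DKIM, DMARC, reputation); the point here is only that a single mechanical check can catch what a helpful, hurried employee may not.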

Phishing is a technical form of manipulation, where an employee unintentionally allows an unauthorized user to have access to an information system or is tricked into providing sensitive information. There are also physical forms of manipulation, such as hackers arriving at a building with their hands full, causing an empathetic employee to behave helpfully by holding the door open, thereby giving the unauthorized individuals access to the facility or information the intruders are not permitted to access.

Moreover, employees are humans and, therefore, are prone to make mistakes. In some cases, an employee may inadvertently disclose classified data to unauthorized personnel or may mistakenly dispose of information in a way that makes it available to unauthorized persons through dumpster diving. In these cases, although the disclosure is unintentional, the impact may still be as grievous as the disclosures by malicious insiders Snowden and Martin.

Malicious Insider Threats
Regarding malicious insiders, their motivations are more sinister. Many US Central Intelligence Agency (CIA) case officers report that the motivations for espionage, which tend to mirror insider threat activity, can be described by the money, ideology, coercion and ego (MICE) acronym.10 Although these four motivations by no means encompass every possible motivation for espionage or insider threat activity, they do provide a good starting point for the conversation about what motivates malicious insiders.

In the case of Snowden, he claims to have had objections to the NSA’s expansive surveillance of Americans.11 This ideological objection to the NSA’s spying could have led Snowden to act in such a way as to expose what he may have thought to be illegal acts. However, the amount and type of data Snowden released also indicate that his motivations were more than simple benevolence. They suggest, in part, that ego may have also played a role in his activities. Since his revelations, he has not shied away from the media and seems to enjoy providing interviews and basking in the attention he has received as a modern-day Daniel Ellsberg, who released the Pentagon Papers during the Vietnam era.

Other insider threats may not arise from a sense of ideological generosity but may result from the insider’s decision to compromise the organization’s information security out of a desire for self-preservation or career advancement. In many cases, employees may steal data to sell to a competitor, damage the reputation of their previous employer or enhance their marketability. Unfortunately, this form of insider threat is more common than many executives and administrators realize.

Other factors that could contribute to an insider’s motivation to steal data or otherwise endanger the organization may be linked to the organization’s culture or its recent business climate. Such conditions may affect the ego or stability of an employee and lead the employee to seek retribution. In an organization undergoing massive layoffs or downsizing, for example, these developments may motivate an employee to commit acts of violence, fraud or theft of intellectual property.

A study by security company Biscom revealed that 85 percent of employees take proprietary data with them when they leave a company for another.12 This staggering figure highlights that almost anyone is or can be an insider threat. Combined with Deloitte’s analysis that the cost of a data breach extends beyond the loss of data to include legal fees, reputational damage, loss of intellectual property, and the research and development needed to replace what was lost,13 such theft can become a devastating financial liability from which an organization may not recover.

Awareness of these motivations can help administrators recognize potential insider threat behavior. With awareness, organizations can also craft mitigation policies to help protect, detect and correct insider threat actions. The following section details how organizations can implement appropriate policies and put into place the right people and technologies to mitigate these threats.

People, Processes and Technology

The basis for information security management is to protect the confidentiality, integrity and availability (CIA triad) of an organization’s data.14 To this point, this article has focused on the confidentiality of data. However, an insider threat can also cause damage to an organization’s data integrity and availability, as an insider can manipulate the data to make the contents incorrect or damage information systems, making the data unavailable when needed. Any compromise to this CIA triad can adversely impact an organization. As such, any strategy an organization adopts should encompass measures to safeguard the entire information security spectrum.

Moreover, an organization’s security strategy should provide protection to information in its three states.15 Data can be at rest, such as when data are stored on a computer or server. Data can also be in a state of processing, such as when an application is using or retrieving the data. Data can also be in transit, such as being sent via email or downloaded from the server. Regardless of the state of the data, administrators should seek to protect data as they move from state to state.

Further, the security strategy policy should encompass three key elements to be successful: people, processes and technology. Without a combination of these core elements, any security policy will fall short of providing the desired outcome.

People are the backbone of any information security ecosystem. Regarding insider threats, people are probably more critical, as people are both the threat and part of the security strategy. Security begins with individual employees, as they are often the weakest link in any security program.16 Having well-trained employees who can recognize the behaviors and motivations discussed previously in this article can strengthen an organization’s security posture.

People are also necessary to monitor and respond to incidents created by insider threats. Without trained incident responders, the other key elements of processes and technology are meaningless. Having appropriate staff will enhance the effectiveness of controls designed for protection, detection and correction.

Of course, those control measures are heavily dependent on processes to guide the incident responders on what actions to perform. Researchers stress that processes are guided by policies, procedures, guidelines and work instructions.17 These documents should provide high-level instructions regarding the organization’s security policy; dictate how, when and by whom communication takes place with external agencies in the event of an incident; and outline standard operating procedures to be followed to protect, detect and correct incidents. The policies should also dictate what constitutes risky behavior and should seek to increase monitoring on those deemed to have a higher risk.

Two of the most basic processes an organization can adopt to ensure it hires and retains the best people are implementing rigorous pre-hire background checks and conducting periodic reexaminations of employees’ backgrounds. Background checks provide insight into past behavior and indications of trustworthiness. Two questions worth considering include, “How honest was an employee on the job application?” and “Is an employee or prospective employee experiencing financial problems that may make him or her more likely to commit fraud?” While background checks are not a panacea to prevent or predict all insider threat behavior, they do provide a solid and cost-effective foundation.

Having the right technology can serve as a force multiplier for the organization’s information security program. Although organizations do not always need the latest and greatest technology, they do require some tools to augment their employees and assist them with monitoring and responding to incidents. Tools assist with analysis and make managing large datasets across an entire enterprise more manageable. Without tools, it is hard for an organization to establish controls, which makes it difficult to protect information, detect when a problem arises and correct the problem, preventing further damage.

Protect, Detect and Correct

In selecting technological tools to mitigate the effects of a potential insider threat, administrators should first conduct a risk analysis to determine what information needs protection, how much protection it needs and for how long.18 They should also conduct a risk assessment to identify what vulnerabilities exist within the organization, such as employees giving their two weeks’ notice, and determine what level of risk these vulnerabilities pose. Once administrators understand the risk their information faces, they can begin to implement countermeasures for the identified vulnerabilities to mitigate risk to an acceptable level.

Preventive Controls
The first countermeasures that administrators should employ are controls designed to protect information (see figure 2 for an example controls matrix). One of the most basic controls is both policy- and technology-based: role-based access control (RBAC). RBAC operates under the premise of least privilege, or providing access to only the information or systems a person needs based on the individual’s position and need to know.19 RBAC inherently limits the distribution of information, reducing the chances of its unauthorized disclosure. RBAC also operates across all the states of information, as long as people follow established processes and keep the information within controlled channels. Administrators can employ additional technical controls, such as encryption, to protect information from unauthorized access.
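The least-privilege premise behind RBAC can be reduced to a very small mechanism: a role grants an explicit set of permissions, and anything not granted is denied. The sketch below is illustrative only; the role names and permission strings are invented for the example.

```python
# Minimal RBAC sketch: each role maps to the explicit set of permissions it
# grants. Anything absent from the set is denied by default (least privilege).
ROLE_PERMISSIONS = {
    "hr_analyst": {"read:personnel_records"},
    "payroll_clerk": {"read:personnel_records", "write:payroll"},
    "helpdesk": {"reset:passwords"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Least privilege in action: a helpdesk technician has no path to personnel
# records, limiting what that account could disclose if misused.
print(is_authorized("helpdesk", "read:personnel_records"))  # False
print(is_authorized("payroll_clerk", "write:payroll"))      # True
```

The default-deny behavior (an unknown role yields an empty permission set) is what limits distribution of information even when a new role is misconfigured or omitted.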

Figure 2

Encryption is another basic control that administrators should use to protect the confidentiality of data. Encryption secures data by transforming intelligible plaintext into unintelligible ciphertext through mathematical operations; public key algorithms, for example, derive their strength from the difficulty of factoring products of large prime numbers.20 Data at rest and data in transit can both be encrypted, keeping them confidential and available only to those authorized to view them.
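The prime-number arithmetic mentioned above can be shown with a toy public key example. This is a sketch for intuition only, using deliberately tiny primes; real deployments use vetted libraries and keys with primes hundreds of digits long, plus padding schemes this example omits.

```python
# Toy RSA-style round trip. Security in practice comes from the difficulty
# of factoring n back into p and q when the primes are very large.
p, q = 61, 53              # two (deliberately tiny) primes
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n: 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)    # c = m^e mod n, using the public key (e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)    # m = c^d mod n, using the private key (d, n)

ciphertext = encrypt(65)
print(ciphertext, decrypt(ciphertext))  # the round trip recovers 65
```

Only the holder of `d` can reverse the transformation, which is what keeps encrypted data at rest or in transit confidential even if intercepted.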

Detective Controls
Unfortunately, not all protective measures for securing data are 100 percent effective. Hackers can obtain unauthorized access to the data or, in the case of insider threats, authorized persons can access the data and use them in an unauthorized manner, such as downloading the data to removable media before quitting the company. In these instances, it is important that an organization has controls to detect when unauthorized access or use has occurred.

To detect insider threat activity, administrators must rely on logs from various systems and monitoring devices, which implies having a technical solution for storing and reviewing security logs. Detection requires knowing what normal looks like and recognizing when behavior patterns fall outside the expected range, which means having trained staff to review the logs.21 It also means there must be a policy directing the security staff to review these logs frequently enough to determine whether a problem exists.

While there are expensive tools specifically created to detect insider threat activity through automated review and analysis of these logs, an organization does not necessarily require them to monitor its employees. Nevertheless, an organization does benefit from having, at a minimum, a data and log aggregator, such as Splunk, to compile logs and surface patterns of irregularity. It also needs technically savvy auditors to comb through the data and look for events that violate the organization’s security policies.

Corrective Controls
Automated tools such as data loss prevention (DLP) systems can take action when they recognize a violation of an organization’s security policy and stop the activity from continuing, such as halting a massive download of files to removable media. In cases where prevention is not possible, once an auditor or a technical solution discovers potential insider threat activity, administrators should immediately seek to correct the problem. Corrective actions should also be based on policy, and there should be an established process for incident responders to follow.

Ultimately, the earlier the problem is detected, the faster responders can mitigate the potential impact. Corrective actions could be as simple as notifying employees that they are committing a violation or as extensive as notifying law enforcement and investigating the incident. Once the incident is corrected, operations should resume and again focus on the protection of information and the detection of incidents.


Insider threats pose a tremendous risk to an organization’s information security because, by their nature, insiders “already have access to an organization’s most sensitive data.”22 The impact they cause can be detrimental because insiders violate the trust the organization places in them and can damage the organization financially, reputationally and legally. Due to the risk insiders pose, organizations must take steps to mitigate the impact of their adverse actions.

An organization’s security strategy should aim to protect the confidentiality, integrity and availability of its data. To protect this CIA triad, organizations should use an approach based on the three pillars of people, processes and technology. Supported by these pillars are the information security controls of protect, detect and correct. With the implementation of these controls, administrators should seek to protect data in all of their states, recognize when there may be a compromise and then strive to restore normal operations by minimizing the impact of the incident through corrective actions.


1 Cole, E.; “Defending Against the Wrong Enemy: 2017 SANS Insider Threat Survey,” SANS Institute, 2017
2 Gemalto, “Gemalto Releases Findings of 2014 Breach Level Index,” 2015
3 Verizon, “2018 Protected Health Information Data Breach Report,” USA, 2018
4 MacAskill, E.; G. Dance; “NSA Files: Decoded, What the Revelations Mean for You,” The Guardian, 1 November 2013
5 Shane, S.; M. Apuzzo; J. Becker; “Trove of Stolen Data Is Said to Include Top-Secret U.S. Hacking Tools,” The New York Times, 19 October 2016
6 US House of Representatives Permanent Select Committee on Intelligence, “Review of the Unauthorized Disclosures of Former National Security Agency Contractor Edward Snowden,” USA, 15 September 2016
7 Costa, D.; “CERT Definition of ‘Insider Threat’—Updated,” CERT, 2017
8 Hadnagy, C.; Social Engineering: The Art of Human Hacking, Wiley Publishing Inc., USA, 2011
9 PhishLabs, “2016 Phishing Trends and Intelligence Report—Hacking the Human,” 2016
10 Burkett, R.; “An Alternative Framework for Agent Recruitment: From MICE to RASCLS,” Naval Postgraduate School, 2013
11 Op cit US House of Representatives Permanent Select Committee on Intelligence
12 Biscom, “Employee Departure Creates Gaping Security Hole, Says New Data,” 23 December 2015
13 Mossburg, E.; J. D. Fancher; J. Gelinne; “The Hidden Costs of an IP Breach, Cyber Theft and the Loss of Intellectual Property,” Deloitte Review, iss. 19, 2016
14 Conklin, W. A.; G. White; C. Cothren; R. Davis; D. Williams; Principles of Computer Security, Fourth Edition, McGraw-Hill Education, USA, 2015
15 Harris, S.; F. Maymi; CISSP All-in-One Exam Guide, Seventh Edition, McGraw-Hill Education, USA, 2016
16 Op cit Hadnagy
17 Op cit Conklin et al.
18 Op cit Harris and Maymi
19 Sizemore, J.; Information Management Manual: A Guide for Students and Practitioners, Valdosta State University, Georgia, USA, 2015
20 Ibid.
21 Lee, R.; “Finding Evil on Windows Systems—SANS DFIR Poster Release,” SANS DFIR, 26 March 2014
22 Op cit Cole

Max Alexander, CISM, CRISC, CISSP, DoD Cyber Crime Investigator
Is a former intelligence officer with 18 years of national-level counterintelligence (CI), human intelligence (HUMINT) and signals intelligence collection (SIGINT) experience in both the physical and virtual environments. He has a breadth of operational experience ranging from the tactical, while assigned to US Army Special Forces, to the strategic, while assigned to the US National Security Agency (NSA) and the US National Counterintelligence Executive (NCIX). He currently serves as the director of cybersecurity for Aveshka Inc., where he provides subject matter expertise to the US Pentagon’s Joint Service Provider—Computer Incident Response Team (PENTCIRT) in digital forensics and cyberinsider threat investigations.



Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and from opinions endorsed by authors’ employers or the editors of the Journal. The ISACA Journal does not attest to the originality of authors’ content.