JOnline: Identity Mining and Insider Threat Monitoring 

While enterprises quite rightly develop controls and prevention techniques to combat external cyberattacks, users within the corporate network pose a significant risk to information assets. Existing users with the accounts, permissions and access required to perform their jobs are increasingly becoming a major source of information security risk through account misuse, data loss and fraudulent activity. This article reviews the definition of an insider threat and its impact, and provides an overview of techniques to control and remediate such threats.

A major driver for insider threats is the motive and intent of an employee to perform a malicious activity, for either financial gain or personal satisfaction. Intent combined with capability from misaligned user access is a toxic combination, with recovery costs often in excess of several million dollars, as seen in the UBS PaineWebber logic bomb attack in 2002.1

An insider threat is no longer associated only with privileged account management or operational superusers. Engineering, development and system administration staff are often able to perform data processing activities and execute transactions that few others are permitted to perform. Threats are also associated with general users, due to a lack of clearly defined controls and policies that delineate separation of duties, data-in-transit activities and access-deprovisioning processes.

The risk associated with data loss, for example, does not always coincide with a malicious activity. Emailing a confidential project file to a contractor or using a home laptop to access the corporate VPN are basic examples in which an internal user, through either ignorance of an existing policy or lack of a policy entirely, could compromise information assets. The rise of Bring Your Own Device (BYOD), or simply the introduction of corporate-managed smartphones, has also opened up another attack vector, in which sensitive emails and corporate information can be lost through negligence, device theft or interference.

The Rise, Reality and Cost

Nearly half of all corporate data breaches can be attributed to employees.2 While each enterprise has different infrastructure, personnel and business processes, this is a staggering percentage attributable to the very employees responsible for developing, protecting and executing the business processes that should aid the growth of the enterprise.

Insider threat increases as an enterprise’s infrastructure and complexity of interaction increase. In today’s ever-connected work and social landscape, the risk of information dissemination is as high as ever.

The rise of home-based working and BYOD has not only increased flexibility and, arguably, efficiency; it has also opened up an avenue of information flow that, if not well managed and maintained, can be a great liability. The main area of concern is the general employee who has only a basic level of IT understanding and a limited grasp of information security. This description covers the vast majority of employees and can put information at risk in the simplest way, e.g., a home worker who allows family members to use a work laptop or the company mobile to download an untrusted app.

The rise of BYOD also opens up the opportunity for users at the opposite end of the technical spectrum to become an information liability. The ‘techno geek’ who knows enough to attach multiple devices to his/her corporate workstation instantly expands the private network to devices that may not conform to internal policy or may have been reconfigured by their owner.

The type, nature and impact of data breaches vary accordingly. A mistyped email address sending a project plan to an incorrect recipient does not have the same economic or public impact on an enterprise as a disgruntled employee leaving a logic bomb on a UNIX server estate.3

On average, an insider crime takes up to 42 days longer to resolve than an external cyberattack, with the cost of repair to the enterprise coming in at US $18,000 per day.4

The tangible recovery costs are often the simplest items to calculate. The additional support costs from vendors, consultancy partners and internal business units that are required to help resolve, secure and document insider-related data breaches are considerable. Cost analysis should focus beyond the monetary, considering, for example, the time and effort of all involved, including senior management.

Data breaches of any kind—internal or external—can also create great damage for an enterprise’s public brand. With the rise of social media and news proliferation, an insider breach can instantly devalue an enterprise’s image as a well-managed and secure business in control of its information assets and customer-related data.

Before Protecting, Identify Malicious Activities

The true impact of insider threats is largely unknown. This is mainly due to an enterprise’s inability to identify, track, report and remedy the data breaches that occur within the corporate landscape.

The following are ways to identify data breaches:

  1. Identify the malicious activity—Identifying malicious or non-malicious activity that could cause data loss or a fraudulent activity requires significant understanding of the underlying data objects, infrastructure and business processes.
  2. Implement a data classification policy—A data classification policy requiring the classification of key information assets is key to fully understanding the scope of the threat landscape. The classification could include the type, value and owner attributes for the main information storage objects. Classification should not focus only on physical files, but on the combination of information held on different media, used in different contextual situations and by different people.
  3. Establish a risk management framework—A basic risk management framework can provide the main building blocks to help identify the type, value and vulnerability of information assets. This could include business-critical human resources (HR) and patent information residing in a data centre, as well as what data can be allowed on portable universal serial bus (USB) devices. The policies that drive those decision points can be created only after a thorough data classification process has taken place.
  4. Document access control processes—Once information has been classified and correctly stored, access control to that information should be implemented. Access control mechanisms are now intrinsic to all key applications and operating systems.

    Models based on mandatory access control or role-based access control frameworks are common and provide a way of restricting users to only the key information required to perform their job. A policy of least-privileged access or access on a need-to-know basis should also be implemented.

    Privileged accounts—those used by administrators and services that have extended capabilities—should be carefully managed with regular certifications, documented account owners and event monitoring. Once a well-documented access control process is in place, generally containing access request processes, automatic provisioning of permissions, approvals, certifications and auditing, it becomes easier to monitor deviations from the correct level of assigned access. Basic security policies should also be in place to cover areas such as data-in-transit allowances—what data can be moved, by whom and to where. The same applies to data in situ (i.e., where the data can be stored and whether they, for example, require encryption or backup). Policies or thresholds for key data processing transactions and account activities make it easier to track erroneous behaviour.
  5. Use data-loss-prevention techniques—Data-loss-prevention techniques should focus on securing the most critical data within the organisation, as identified in the classification process. Data held by enterprise resource planning (ERP) and HR systems, for example, generally receive regular audit reviews of main access control lists, with redundant users being removed. Data at rest should be encrypted either at the file-system level or—as is increasingly common—at the full-disk and storage area network (SAN) level using microcontrollers. Encryption of internal-networked data is important to overcome the ever-increasing number of cyberattacks that use a legitimate employee account as an attack vector.

    Many organisations are complex, commonly using data sharing and collaborative work environments. Critical data-in-motion (DiM) protection should focus on analysing network packet streams to check if sensitive data are being transported beyond the egress point on the corporate network. Event monitoring and logging should be enabled and tracked for all key databases, applications, directories and operating systems that have a high business value and in which an insider attack or fraudulent activity could result in significant business damage.
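A DiM egress check of the kind described in step 5 can be sketched as a simple content scan over outbound payloads. The patterns below (a payment-card number shape and a confidentiality marking) are illustrative assumptions, not part of the article; a real deployment would derive its patterns from the data classification policy.

```python
import re

# Hypothetical sensitive-data patterns; a production DLP tool would be
# driven by the classifications produced in step 2.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "confidential_marking": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_outbound(payload: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound payload."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(payload)]

# An email body crossing the egress point would be scanned before release.
hits = scan_outbound("Attached is the CONFIDENTIAL merger plan.")
```

In practice the scan would sit at the network egress point (mail gateway, web proxy) and raise an event rather than silently block, so the SIEM layer described next can correlate it with other activity.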

The recording of all low-level transactions helps to correlate real-time activity with access control management policies implemented at a higher business level. Nearly all computational systems can produce a verbose output of activity, which can be centralised into a security information and event monitoring (SIEM) solution.

SIEM solutions generally have basic features that allow for centralised collection, correlation and viewing of disparate log data (see figure 1). Managing log data at an individual application level can be time-consuming and error-prone, often causing administrators to miss key events among false-positive alerts or alerts that do not provide a full view of the threat.

Management of the transaction data should require the use of profilers and behavioural analytics to help define where alerts should be raised and where security administrators should focus their attention.

One way to help identify exceptions and high-risk alerts is to analyse the historical data of a key application or user in order to establish a baseline usage profile. This helps identify spikes in activity, such as excessive failed login attempts, multiple Internet Protocol (IP) addresses using the same account, transactions being performed after hours or new transactions that were previously unknown. Comparing current activity to a historical average can provide a greater understanding of whether a transaction is normal/typical or potentially a threat.
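The baseline comparison above can be sketched as a simple deviation test: compute the mean and spread of a historical series (here, daily failed-login counts) and flag any day that deviates by more than a chosen number of standard deviations. The three-sigma threshold is an illustrative assumption, not a figure from the article.

```python
from statistics import mean, stdev

def is_spike(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it exceeds the historical baseline by more
    than `threshold` standard deviations (illustrative alerting rule)."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        # A perfectly flat history: any deviation at all is anomalous.
        return today != baseline
    return (today - baseline) / spread > threshold

# Seven days of failed-login counts for one account.
failed_logins = [3, 2, 4, 3, 5, 2, 3]
```

A sudden burst of 40 failures would be flagged, while an ordinary day of 4 would not; in practice the same test is applied per metric and per account.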

This approach does have the drawback of requiring historical data, which may not always be available. To counteract this, peer-group analysis can be used. Peer-group analysis helps remove the risk associated with tracking fraudulent activity of a known malicious user who gradually alters his/her behaviour to avoid detection, an approach that is similar in design to a salami attack. A salami attack is often associated with financial fraud, in which the perpetrator performs a malicious activity using an amount just below an alerting threshold, but performs the activity multiple times to gain a greater profit or outcome.
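The salami pattern defeats per-transaction thresholds precisely because each individual action stays under the limit; a cumulative check over a window closes that gap. The limits in this sketch are illustrative assumptions, not values from the article.

```python
def flags_cumulative(amounts: list[float], per_txn_limit: float,
                     window_limit: float) -> bool:
    """Flag activity that breaches either the per-transaction limit or,
    crucially for a salami attack, the cumulative limit for the window."""
    if any(amount > per_txn_limit for amount in amounts):
        return True
    return sum(amounts) > window_limit

# Ten transfers of 990 each: every one stays under a 1,000 per-transaction
# limit, but the cumulative total breaches a 5,000 window limit.
transfers = [990.0] * 10
```

The same idea generalises beyond money: counts of records exported or files copied per week can be summed against a window threshold in the same way.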

Analysing a user’s activity in relation to peers can make it clearer as to which activities are typical across the group and which are irregular or being performed only by a single user. A peer group can consist of a relationship between users based on their job role, location or manager. This type of contextual relationship is based on the assumption that employees performing the same job role are generally using the same systems, have similar access rights and perform similar transactions.

Analysing activity in relation to other users helps develop a more contextual view of the user and reduces the risk of false-positive alerts. The peer-group approach should also be used to analyse existing user-to-entitlement relationships. When attempting to keep a least-privileged approach to access control, users often gather entitlements as they move between job roles within an enterprise.

This privilege creep of permissions is a common occurrence and, even with the use of role-based access control, can result in users having considerably different entitlements than their colleagues.
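Peer-group entitlement analysis of this kind can be sketched as an outlier test: any entitlement held by a user but by fewer than a chosen share of his/her peers is a candidate for privilege-creep review. The grouping, threshold and entitlement names below are illustrative assumptions.

```python
from collections import Counter

def outlier_entitlements(user: str, group: dict[str, set[str]],
                         min_peer_share: float = 0.5) -> set[str]:
    """Return entitlements held by `user` that fewer than `min_peer_share`
    of his/her peers hold -- candidates for privilege-creep review."""
    peers = {u: ents for u, ents in group.items() if u != user}
    if not peers:
        return set()
    counts = Counter(e for ents in peers.values() for e in ents)
    return {e for e in group[user]
            if counts[e] / len(peers) < min_peer_share}

# A peer group built from job role: carol kept an entitlement from a
# previous role that none of her current peers hold.
team = {
    "alice": {"crm_read", "crm_write"},
    "bob":   {"crm_read", "crm_write"},
    "carol": {"crm_read", "crm_write", "payroll_admin"},
}
creep = outlier_entitlements("carol", team)
```

Flagged entitlements would feed the certification process described earlier rather than being revoked automatically, since a legitimate business reason may exist.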

The key to any activity and access analysis is to identify abnormalities quickly and meaningfully. Any alert that is created must contain sufficient information for it to be recognised as a genuine threat and acted upon quickly with the correct level of resource.


An effective alert system is only part of the insider protection strategy. The logic behind how alerts are developed is a critical component, as it helps reduce the noise often associated with analysing terabytes of access and activity information.

Once an alerting framework has been created using peer-group analysis or behavioural profiling, an escalation workflow needs to be put in place to direct alerts to the correct people quickly and effectively (see figure 2). By using a risk-management framework to tag alerts and assign the correct remediation and escalation processes, alerts can be routed to the correct owners—those who have the responsibility and knowledge to implement a recovery action.

Initial remediation of any alert should be focused on the most critical systems and the most high-risk users and transactions. These incidents are often associated with users with high levels of permissions, redundant users or shared accounts.

Cross-domain support is crucial to establish correct remediation processes to not only remove the initial security breach, but also prevent breaches of the same type from occurring in the future. Incident logging, auditing and reporting are key.

Future prevention techniques should include a signature-based policy that can help detect recurrences of known attack patterns via historical data analysis, as well as preventive approaches using policy-based decision-management tools.

Risk-based policy engines that manage the authorisation and authentication processes should leverage the policies designed as a byproduct of the peer group analytics and behavioural profiling. This can help reduce future incidents before they occur.


Insider threat is a real and costly risk to the information assets of any enterprise. While cyberattacks are on the increase, becoming more sophisticated and organised, internal security policies and controls need to adapt to help identify, reduce and ultimately remove the threats posed by enterprises’ employees.

Either through malicious use or simple ignorance of existing security approaches, employees can carry a more significant risk than first thought, resulting in data breaches, customer record loss and brand damage. A multi-layered approach utilising effective security policy, access control, event log analytics and behavioural profiling can assist in identifying and managing insider threat and abnormal usage.

Classification of information assets provides a better approach to securing data in transit and data at rest. Egress end-point detection of sensitive data movements helps reduce potentially non-malicious but policy-violating incidents, such as information being sent via attachment outside of the corporate network. Encryption of data at rest, even within the internal network, is now a common option with effective software solutions as well as more intensive disk-level encryption.

Ultimately, behavioural analytics and data-loss prevention should be built on the foundation of a strong and well-maintained security policy that is frequently disseminated to employees and non-IT staff as part of an ongoing security intelligence campaign.


1 Gaudin, Sharon; ‘UBS Trial Puts Insider Security Threats at Center Stage’, InformationWeek, 12 June 2006
2 Verizon RISK Team and US Secret Service; 2010 Data Breach Investigations Report, Verizon, 2010
3 Op cit, Gaudin
4 Ponemon Institute; ‘First Annual Cost of Cyber Crime Study’, ArcSight, 2010

Simon Moffatt, CISA, CISSP, MBCS, has more than 10 years of experience working in information security from a software vendor, consultancy and industry perspective, specializing in identity and access management, insider threat, role-based access control, and identity intelligence. Currently based in the UK, with experience gained in the EMEA (Europe, Middle East and Africa) region and the US while working for Oracle Corp. and Sun Microsystems, Moffatt is also the founder and content manager of Infosec Professional, an online news and analysis magazine for information security professionals.

The ISACA Journal is published by ISACA. Membership in the association, a voluntary organization serving IT governance professionals, entitles one to receive an annual subscription to the ISACA Journal.

Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and/or the IT Governance Institute and their committees, and from opinions endorsed by authors’ employers, or the editors of this Journal. ISACA Journal does not attest to the originality of authors’ content.

© 2012 ISACA. All rights reserved.

Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For other copying, reprint or republication, permission must be obtained in writing from the association. Where necessary, permission is granted by the copyright owners for those registered with the Copyright Clearance Center (CCC), 27 Congress St., Salem, MA 01970, to photocopy articles owned by ISACA, for a flat fee of US $2.50 per article plus 25¢ per page. Send payment to the CCC stating the ISSN (1526-7407), date, volume, and first and last page number of each article. Copying for other than personal use or internal reference, or of articles or columns not owned by the association without express permission of the association or the copyright owner is expressly prohibited.