Norman M. Sadeh, Ph.D.
More than 500 million phishing emails are sent every day. Although spam emails account for 70 percent of all email traffic, spam is mainly a nuisance.1 Phishing, on the other hand, can lead to costly security breaches. In the US alone, phishing attacks on customers have been reported to result in direct financial losses of several billion US dollars per year. However, for corporations and government organizations, this is just the tip of the iceberg as more targeted phishing emails (spear-phishing attacks) can lead to potentially devastating security breaches, loss of sensitive data and significant financial losses.
The following are a few examples of recent, high-profile phishing attacks: the 2012 cyberattack on the White House,2 the 2011 breach of security firm RSA, in which an employee opened a malicious attachment delivered by a phishing email,3 and the Night Dragon attacks on global energy companies.4
A large majority of phishing emails sent today contain malicious URLs used to deliver malware or to entice users to disclose sensitive information, such as login credentials. For example, a recent report by Websense estimates that 92 percent of phishing emails rely on URLs to trick users.5
While most antispam/antivirus vendors have repurposed their filters to also catch phishing emails (including phishing emails with malicious URLs), vendor solutions rely primarily on manually maintained blacklists. Specifically, to minimize the risk of flagging legitimate sites, these blacklists typically include only fraudulent URLs that have been manually vetted. As a result, blacklists are always one step behind, often lagging by several critical hours.6 During that lag time, many phishing emails go undetected by spam filters, and many of the malicious web sites to which phishing victims are directed are not flagged by their browsers, which rely heavily on blacklists as well. Yet, studies have shown that during regular work hours, 50 percent of users who fall for phishing attacks read their email within two hours of it reaching their inbox, and 90 percent do so within eight hours.7 In other words, a lag of just a few hours in updating blacklists can have devastating consequences. On cell phones, emails are typically read even faster, leading to even higher levels of vulnerability.8
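To see why a few hours of blacklist lag matters, the two read-time statistics above can be turned into a rough exposure estimate. This is a back-of-the-envelope sketch of my own, not a calculation from the studies cited: it simply interpolates linearly between the reported points (50 percent of victims read the email within two hours, 90 percent within eight).

```python
# Rough sketch: estimate what fraction of eventual victims have already
# read a phishing email before a blacklist update arrives. The anchor
# points (2h, 50%) and (8h, 90%) come from the article; the linear
# interpolation between them is an assumption for illustration only.

def fraction_read(hours: float) -> float:
    """Piecewise-linear interpolation through (0, 0), (2, 0.5), (8, 0.9)."""
    points = [(0.0, 0.0), (2.0, 0.5), (8.0, 0.9)]
    if hours >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if hours <= x1:
            return y0 + (y1 - y0) * (hours - x0) / (x1 - x0)
    return points[-1][1]

for lag in (1, 2, 4, 8):
    print(f"blacklist lag {lag}h -> ~{fraction_read(lag):.0%} of victims already exposed")
```

Even under this crude model, a four-hour lag leaves well over half of eventual victims exposed before the blacklist catches up.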
Reply-to phishing emails with no attachments and no links are another example of phishing attacks that often go undetected by antispam/antivirus filters. This is due in part to antispam filters' reliance on simple bag-of-words techniques, which look for combinations of words that are indicative of spam. These techniques are effective at catching spam, but they cannot differentiate phishing emails from legitimate emails because phishing emails are crafted to look just like legitimate emails.
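The limitation is easy to see in miniature. A bag-of-words filter reduces each message to its word counts, discarding everything else, so a phishing email that copies a bank's wording is nearly indistinguishable from the real thing. The two example messages below are invented for illustration:

```python
# Illustrative sketch (example emails are hypothetical): a bag-of-words
# representation keeps only word frequencies, so a well-crafted phishing
# email shares almost all of its vocabulary with a legitimate one.
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Naive tokenization: lowercase and split on whitespace."""
    return Counter(text.lower().split())

legit = "Dear customer, please review your recent account statement online."
phish = "Dear customer, please review your account statement at this link."

a, b = bag_of_words(legit), bag_of_words(phish)
shared = sum((a & b).values())              # overlapping word counts
total = max(sum(a.values()), sum(b.values()))
print(f"word overlap: {shared}/{total}")    # most of the vocabulary is shared
```

A classifier that sees only these counts has almost nothing to separate the two messages on, which is precisely the problem described above.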
Other techniques such as sender reputation scores or recently introduced standards such as DomainKeys Identified Mail (DKIM), Sender Policy Framework (SPF), and Domain-based Message Authentication, Reporting and Conformance (DMARC) have little impact on the effectiveness of phishing attacks because phishers have found various ways to defeat these practices.9
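For context, here is a minimal sketch of what one of these standards looks like in practice. A DMARC policy is simply a DNS TXT record of tag=value pairs published for a specific domain (the record below is hypothetical). Because the policy only covers that exact domain, a phisher who registers a lookalike domain can publish a perfectly valid record of his own, which is one reason these standards do not stop phishing by themselves.

```python
# Minimal sketch of parsing a DMARC TXT record into its tag=value pairs.
# The record and domain below are hypothetical, for illustration only.
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record ('tag=value; tag=value; ...') into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            tag, _, value = part.partition("=")
            tags[tag.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # the handling the domain owner requests for failing mail
```

Note that the `p=` tag only tells receivers what to do with mail that fails authentication *for this domain*; mail sent from a visually similar domain with its own valid records sails through.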
Ironically, this unfortunate state of affairs is not obvious to those who look at the statistics advertised by vendors promoting their antivirus/antispam filters. Many continue to boast about their ability to catch “up to 99 percent” of malicious email, a confusing statement that clumps together spam, viruses and phishing. Because phishing amounts to only approximately 0.5 percent of that traffic, a filter could miss most phishing emails and still catch 99 percent of malicious email overall. Moreover, the consequences of finding an unfiltered spam email in one’s inbox cannot be equated with the potential consequences of receiving a phishing email. In other words, spam vendors are often comparing apples with oranges.
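The arithmetic behind this base-rate effect is worth making explicit. In the sketch below, the 0.5 percent phishing share comes from the figures above, while the per-category catch rates are hypothetical numbers chosen only for illustration:

```python
# Back-of-the-envelope sketch: the 0.5 percent phishing share is cited in
# the article; the 99.7% and 40% catch rates are hypothetical assumptions.
# Integer arithmetic is used so the counts are exact.
total = 100_000                   # malicious emails (spam + phishing)
phish = total * 5 // 1000         # ~0.5 percent are phishing -> 500
spam = total - phish              # 99,500

spam_caught = spam * 997 // 1000  # hypothetical: 99.7% of spam caught
phish_caught = phish * 40 // 100  # hypothetical: only 40% of phishing caught

overall = (spam_caught + phish_caught) / total
print(f"overall catch rate: {overall:.1%}")                   # looks impressive
print(f"phishing missed:    {1 - phish_caught / phish:.0%}")  # the real story
```

Under these assumptions the headline number still rounds to roughly 99 percent, even though a clear majority of the phishing emails slip through.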
Additionally, vendors fail to disclose the number of false positives they may end up flagging to reach the 99 percent performance they boast. False positives are legitimate emails that are flagged as spam and moved to the user’s junk box, forcing the user to regularly check whether an important email found its way there by mistake. In truth, to reach 99 percent effectiveness, many spam filters require settings that also produce more false positives, which effectively reduces the value of the filter. This problem is best illustrated by the attack on RSA employees mentioned earlier. RSA’s filters had identified the malicious phishing emails. However, RSA employees had learned to regularly check their junk box because of prior false positives. In this instance, one RSA employee retrieved the phishing email from his junk box and opened the malicious zero-day attachment that had come with it, leading to the devastating consequences already described.10
When it comes to phishing, all emails are not created equal. The many simulated phishing campaigns launched by organizations on their employees have repeatedly shown that high-volume phishing emails claiming to come from well-established organizations, such as large banks, Internet service providers (ISPs) or the US Internal Revenue Service (IRS), are somewhat less likely to be opened. In contrast, spear-phishing emails that are typically directed at smaller groups of people, such as employees of a particular department or an individual, tend to be much more effective at fooling their recipients. These types of spear-phishing emails have been used by criminals to initiate many of the high-profile security breaches reported over the past few years, as well as many lower-profile or unreported attacks on organizations of all sizes. Statistics that look simply at percentages of phishing emails caught, including many of the easy-to-detect, high-volume phishing emails, fail to recognize these complexities and may, therefore, produce seemingly reassuring numbers that are skewed toward the least dangerous types of phishing emails. In fact, simulated spear-phishing campaigns used by organizations to assess the readiness of their workforce have shown that, without adequate training, as many as 40 to 50 percent of recipients fall for mundane spear-phishing email attacks.11, 12
Given this dilemma, it is time for industry and independent evaluation organizations to come up with benchmarks that reflect the significantly higher risk associated with phishing. Focusing on spam and viruses is no longer a fair assessment of the potential risk. The following should be considered:
Phishing has grown to become a major source of vulnerability with annual losses in the billions of US dollars. Its use by criminals is not limited to fooling customers. Instead, it has evolved into one of the top concerns for both corporate and government organizations.
In a recent study conducted jointly by Wombat Security Technologies and Virus Bulletin, a number of the top antispam/antivirus filters were shown to have missed more than one in four run-of-the-mill phishing emails.17 Therefore, it is no surprise that, rather than publishing the performance of their products on phishing emails, vendors continue to lump phish and spam together in one big general category. As many companies review their protection against phishing and consider new products, they should take advantage of free evaluation licenses and aim to evaluate these products on live emails. Performance on old collections of phishing emails can look deceptively reassuring, as many products continue to rely on manually maintained blacklists.
Also, a dedicated antiphishing filter can often help boost an organization’s existing defense without requiring it to replace its existing antispam/antivirus solution.
Next, because there is no silver bullet and no filter will ever be able to catch all phishing emails, organizations should start thinking of their employees as part of the solution to phishing rather than part of the problem. Significant advances in training using simulated phishing attacks and interactive training modules have been shown to drastically reduce the chance that an employee falls for one of these attacks.
1 Consumer Reports, “State of the Net,” 2012
2 Mak, Tim; “White House Confirms Cyberattack,” Politico, 1 October 2012, www.politico.com/news/stories/1012/81847.html?hp=l4
3 Zetter, Kim; “Researchers Uncover RSA Phishing Attack, Hiding in Plain Sight,” Wired, August 2011, www.wired.com/threatlevel/2011/08/how-rsa-got-hacked/
4 McAfee, Global Energy Cyberattacks: “Night Dragon,” white paper, 10 February 2011, http://media.scmagazineus.com/documents/21/mcafee_nightdragon_whitepaper__5179.pdf
5 Websense, Defending Against Today’s Targeted Phishing Attacks, October 2012, https://www.websense.com/assets/white-papers/whitepaper-defending-against-todays-targeted-phishing-attacks-en.pdf
6 Sheng, S.; B. Wardman; G. Warner; L. Cranor; J. Hong; C. Zhang; “An Empirical Analysis of Phishing Blacklists,” CEAS, 2009, http://ceas.cc/2009/papers/ceas2009-paper-32.pdf
7 Kumaraguru, P.; J. Cranshaw; A. Acquisti; L. Cranor; J. Hong; M. A. Blair; T. Pham; “School of Phish: A Real-world Evaluation of Anti-phishing Training,” Proceedings of the 5th Symposium on Usable Privacy and Security, Mountain View, California, USA, 15-17 July 2009, http://cups.cs.cmu.edu/soups/2009/proceedings/a3-kumaraguru.pdf
8 Boodaei, M.; “Mobile Users Three Times More Vulnerable to Phishing Attacks,” Trusteer, January 2011, www.trusteer.com/blog/mobile-users-three-times-more-vulnerable-phishing-attacks
9 Op cit, Websense
10 Op cit, Zetter
11 Op cit, Kumaraguru
12 Wombat Security Technologies, “PhishGuru: Assess and Train Your Employees Using Simulated Phishing Attacks,” October 2012, www.wombatsecurity.com/phishguru
13 An example of such a product is Wombat’s PhishPatrol.
14 Fette, I.; N. Sadeh; A. Tomasic; “Learning to Detect Phishing Emails,” Proceedings of the 16th International Conference on the World Wide Web (WWW2007), p. 649-656
15 Op cit, Websense
16 Op cit, Wombat Security Technologies
17 Grooten, Martijn; “VBSpam Comparative Review November 2012,” Virus Bulletin, 20 November 2012, https://www.virusbtn.com/virusbulletin/archive/2012/11/vb201211-vbspam-comparative
Norman M. Sadeh, Ph.D., is a professor of computer science at Carnegie Mellon University (Pittsburgh, Pennsylvania, USA). He is also cofounder and chief scientist of Wombat Security Technologies, a company that commercializes a suite of cybersecurity training software products and antiphishing filtering products that he originally developed with his colleagues at Carnegie Mellon University.
The ISACA Journal is published by ISACA. Membership in the association, a voluntary organization serving IT governance professionals, entitles one to receive an annual subscription to the ISACA Journal.
Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and/or the IT Governance Institute and their committees, and from opinions endorsed by authors’ employers, or the editors of this Journal. ISACA Journal does not attest to the originality of authors’ content.
© 2013 ISACA. All rights reserved.
Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For other copying, reprint or republication, permission must be obtained in writing from the association. Where necessary, permission is granted by the copyright owners for those registered with the Copyright Clearance Center (CCC), 27 Congress St., Salem, MA 01970, to photocopy articles owned by ISACA, for a flat fee of US $2.50 per article plus 25¢ per page. Send payment to the CCC stating the ISSN (1526-7407), date, volume, and first and last page number of each article. Copying for other than personal use or internal reference, or of articles or columns not owned by the association without express permission of the association or the copyright owner is expressly prohibited.