Markus Pfister, CISA
Since the release of Aleph One’s famous paper “Smashing the Stack for Fun and Profit” in 1996,1 buffer overflows have ranked among the most dangerous software vulnerabilities.2
But why should the IS auditor care about buffer overflows?
Buffer overflows are a tool of the trade for gaining access to a computer system. Phishing attacks require user interaction: an unheeding user is tricked into opening malicious content, allowing an attacker to access the user’s computer. Buffer overflows, by contrast, work without any user involvement. From a hacker’s perspective, this is a less risky attack than phishing, as it avoids the risk of being noticed by alert users. Furthermore, if a process with high privileges can be attacked (exploited), the attacker may inherit the privileges of that process using buffer overflow techniques.
Buffer overflow exploits are readily available on the Internet. For an attacker, it can be as simple as doing some reconnaissance to determine which programs run on a potential victim system and checking for relevant buffer overflow exploits. Buffer overflow vulnerabilities found by an individual hacker who does not share the knowledge with the vendor are deadly: such a zero-day exploit allows the attacker to strike without giving the vendor of the vulnerable program a chance to fix the vulnerability.
To mitigate the risk, IS auditors should have a general knowledge of buffer overflows and how they work. This article explains the basic concepts of buffer overflows and presents seven points IS auditors should consider in their audit programs.
What is needed to create a buffer overflow?
First, a vulnerable program has to be found. This is typically done via fuzzing: passing oversized input to a program and using a debugger to examine how the program reacts. If the oversized input causes the program to crash, chances are high that a buffer overflow vulnerability can be exploited, because the input has overwritten the program’s internal memory.
Second, shell code is needed. A shell is a command-line interface that lets users type commands such as “dir” or “copy.” Shell code is machine code (usually written in assembly) that opens such a shell when invoked. In other words, shell code gives the attacker a means to execute commands on the exploited computer system.
Last, the shell code provided by the attacker must be executed through a buffer overflow. This will provide the attacker access to the exploited system.
For example, inside a maintenance program for account holder names and corresponding accounts, the programmer provides space for “holder” and “account.” The program writes these values to the database. The account data are stored right after the holder data in the computer memory, as shown in figure 1.
Entering “John Doe” would change the holder to “John Doe,” as expected from the program. However, what if someone typed the following, passing 20 characters instead of the expected 10?
The result depends on the security of the program code. If the program checks that only 10 characters are allowed for holder (length check), the input is refused or truncated to the allowed size. If the programmer forgot this check, the memory following holder (in this example, account) is overwritten with “0000004711,” resulting in the change illustrated in figure 2.
Since the program writes the input to the database, John Doe’s account number is now 4711.
Although this looks innocent, the attacker was able to change the program from the outside. If shell code is passed to the maintenance program, for example, the attacker might gain access to the computer on which the maintenance program runs.
How can the attacker pass malicious code (the shell code) to a vulnerable program? Input is provided through arguments (such as invoking Notepad with an additional file name), through environment variables that the program reads (e.g., the language setting) or through advanced hacking techniques.3 Normally, an attacker needs access to the local machine to provide arguments to the vulnerable program or to change environment variables.
Many network-based services (e.g., web services, FTP, Telnet) or custom network programs accept remote input over the network. “Remote exploits” target programs whose exploitable vulnerabilities are accessible over the network.
Programs accessible from the Internet or some other network should attract the IS auditor’s attention since attacks are possible from every computer in the network.
As mentioned, most buffer overflows aim at gaining a shell on the exploited system, and many ready-to-use shell codes are available on the Internet for this purpose. After a successful exploit, an attacker might connect to the exploited server (bind shell) or have the exploited server connect back to the attacker’s computer (reverse shell) (figure 3). Firewalls control the traffic to and from a computer system, acting as traffic cops. It is common that connections originating from a computer within a company network are considered more trustworthy than incoming, external connections from the Internet. Thus, an attacker prefers a reverse shell.
Shell code can be detected by an intrusion detection system (IDS). Shell code obfuscation (hiding) is a challenge to an IDS, especially when the shell code is polymorphic (changes its appearance).4
IS auditors should be aware that IDS may help raise the bar for an attacker, but there is no guarantee that an attack will be detected.
Without going too deep into detail, buffer overflows aimed at executing attacker-provided code work as follows (figure 4): Input (malicious code) provided by the attacker (1) is read by the program and placed into the data fields defined by the programmer without proper length checking, allowing the attacker to overwrite the program’s memory.5 The goal is to change the program flow so that the attacker’s shell code gets executed. Programs call functions many times, e.g., to copy data. When a function has done its work, the next statement to execute is the one at the function’s return (RET) address. If the attacker succeeds in overwriting the RET address (2) with the start address of the shell code (1), the shell code is executed on function return (3).
To increase the chances of hitting the shell code, no-operation instructions (which do nothing) are prepended to it. If execution lands on any of these no-operation instructions, the shell code runs after the remaining no-operation instructions are processed.
Since the mechanism for returning from a function is always the same, validating checks can be inserted into the code to verify that the RET address has not been changed. This is what the compiler-defined safeguards discussed later in this article do.
Because an attacker is interested in high privileges, buffer overflows are used to attack programs that already run with high privileges, such as root or admin. Thus, the executed shell code may inherit these privileges.
IS auditors should pay attention to programs that run with high privileges because they are primary targets.
To counter the vulnerabilities caused by buffer overflows, compilers and operating systems offer built-in mechanisms to make attacks harder. IS auditors should know about them and include checks in their audit programs.
Address Space Layout Randomization
Address Space Layout Randomization (ASLR) randomizes the address (memory) space of a program every time the program is run. Without ASLR, an attacker can determine in advance the location of the RET address as well as the address (start of the shell code) that must be written to the RET address. Techniques include debugging, probing and analyzing the executable program file. Changing the address space every time the program runs counters exploits based on fixed addresses.
Now, only two types of exploits have a chance to succeed: brute-force attacks and attacks that use addresses relative to the randomized addresses.6
ASLR should be enabled at the operating system level (e.g., UNIX derivatives, Windows). IS auditors should consider this in their audit procedures.
Compiler Features
Compilers offer features to protect memory from buffer overflow attacks. For example, they detect when the RET address or program variables are overwritten.7
IS auditors should ensure that programs are compiled with stack protection enabled (e.g., -fstack-protector-all and -D_FORTIFY_SOURCE8 for the GNU compiler). C# programs should use safe code, which disallows the classic pointer types widely used in C/C++.
Nonexecutable Stack and Heap
A nonexecutable stack and heap prevent attack code passed to a vulnerable program from being executed. Modern CPUs support this feature by allowing memory to be marked “never executable” (NX). Microsoft coined the term “data execution prevention” (DEP).
IS auditors should ensure that programs accepting remote input have nonexecutable stack/heap protection enabled.
Sandboxes and Virtualization
The last and most effective protection mechanism is separating applications from each other using virtualization. Code may run in a “sandbox” (e.g., Java, .NET9), limiting a successful exploit to the sandbox. With containers, many instances of an application—regardless of the programming language in which it was written—may run on the same hosting server in a “jail,” separated from each other.11
IS auditors should audit the following points:
Part of the fascination of the security field is the ongoing contest between those who try to exploit software and those who provide countermeasures.10
A critical step in the audit process is to determine the applications and servers that might be subject to buffer overflow attacks and assess the associated risk. The following questions may help IS auditors find the appropriate answers:
Buffer overflows are a serious threat. However, many protection and prevention mechanisms are available and their usage should be audited.
While it is not enough to rely solely on the protection mechanisms presented in this article, they help raise the bar for an attacker. Virtualization, used correctly, further confines the attack to defined security perimeters.
The best countermeasure is to eliminate buffer overflows—using development tools that assist programmers in detecting pitfalls, or using programming languages and runtime environments that are (relatively) immune to buffer overflow attacks.
Most important, developers must be trained to know the implications of buffer overflows and how to avoid them, and IS auditors should understand the threats of buffer overflows and act accordingly in their audit programs.
1 Aleph One, “Smashing the Stack for Fun and Profit,” 1996, http://insecure.org/stf/smashstack.html
2 CWE/SANS ranks buffer overflows third. The MITRE Corporation, “2011 CWE/SANS Top 25 Most Dangerous Software Errors,” 2011, http://cwe.mitre.org/top25/
3 Scut/Team Teso, “Exploiting Format String Vulnerabilities,” 2001, http://crypto.stanford.edu/cs155old/cs155-spring08/papers/formatstring-1.2.pdf
4 Song, Yingbo; et al.; “On the Infeasibility of Modeling Polymorphic Shell Code,” 2009, Springer, www.cs.columbia.edu/~angelos/Papers/2010/polymorph-mlj.pdf
5 This is the typical strcpy(bufferToOverflow, inputFromUser) programming flaw. Before “inputFromUser” is copied (this is what strcpy does), it should be cleaned (untainted) of dangerous injections and truncated to the size “bufferToOverflow” can hold.
6 Learn more about this in: Müller, Tilo; “ASLR Smack & Laugh Reference,” 2008, http://users.ece.cmu.edu/~dbrumley/courses/18739c-s11/docs/aslr.pdf
7 IBM, “GCC Extension for Protecting Applications From Stack-smashing Attacks,” www.research.ibm.com/trl/projects/security/ssp/
8 With the GNU compiler’s FORTIFY_SOURCE setting, specialized versions of system calls known as notorious causes of buffer overflows are used. A system call such as “copy memory from A to B” limits the amount of copied memory to the size of the target memory location.
9 Dai, Andrew; “Exploring the .NET Framework 4 Security Model,” http://msdn.microsoft.com/en-us/magazine/ee677170.aspx
10 The following document contains an excellent chart depicting the advances in circumventing protection mechanisms against buffer overflows. The chart demonstrates that buffer overflows are evolving over time and that there does not seem to be an end to it in the near future. Syssec, “Deliverable D7.1: Review of the State-of-the-Art in Cyberattacks,” 2011, p. 14, www.syssec-project.eu/media/page-media/3/syssec-d7.1-SoA-Cyberattacks.pdf
11 For an in-depth discussion of virtualization threats, refer to: Pfister, Markus; “Risk Mitigation in Virtualized Systems,” 2008, www.isaca.ch/home/isaca/files/Dokumente/04_Downloads/DO_04_Diplomarbeiten/Diplom_Risk_Mitigation.pdf
Markus Pfister, CISA, works as an IT auditor and security consultant and is a guest teacher at the Lucerne University of Applied Sciences (Lucerne, Switzerland) for virtualization topics, where he studied information security. He was one of the developers of the COAST C++ framework and specialized in developing reverse proxy servers based upon this framework. Pfister’s interests include ethical hacking and buffer overflows.
The ISACA Journal is published by ISACA. Membership in the association, a voluntary organization serving IT governance professionals, entitles one to receive an annual subscription to the ISACA Journal.
Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and/or the IT Governance Institute and their committees, and from opinions endorsed by authors’ employers, or the editors of this Journal. ISACA Journal does not attest to the originality of authors’ content.
© 2013 ISACA. All rights reserved.