JOnline: The Art of Database Monitoring 

Most of a company’s business-critical data is stored in databases. Losing data confidentiality, availability or integrity can cost a company dearly in lost sales, reputational damage and litigation costs. Best practices and, in many cases, government regulations mandate the use of controls to adequately safeguard business data.

This article describes why a company should (or must) use database monitoring as a vital part of its security controls and how it should go about implementing it.

Sources of Data Breaches

According to a 2007 study,1 85 percent of businesses have experienced a data security breach. The survey also found that about 23 million adults had been notified that their data were compromised or lost; of those, 20 percent terminated their accounts immediately after notification, while another 40 percent were considering termination at the time of the survey. Breaches cost an estimated US $182 per compromised record, and data breaches remain the leading cause of financial losses. According to data from Privacy Rights Clearinghouse, a web site devoted to maintaining a record of all data security breaches in the US, laptops are the number one source of data breach incidents (47 percent), followed by databases (40 percent), tapes (11 percent) and e-mail (2 percent).2 Ranked instead by the amount of data lost (see figure 1), databases are the number one source (64 percent), followed by laptops (25 percent), tapes (10 percent) and e-mail (1 percent).

Monitoring is fairly common at the network layer, but monitoring at the application layer remains relatively rare. A survey conducted by Application Security and the Ponemon Institute at the 2007 Gartner IT Security Summit revealed that 40 percent of companies are not monitoring their databases for suspicious activity. According to the survey of 649 IT professionals (60 percent in chief information officer [CIO] or chief technology officer [CTO] positions), 78 percent of respondents said their databases are critical or important to their business and contain customer data.3 It is clear that, while loss of data through corporate databases represents a major risk for most organizations, very few have adequate controls in place to monitor data policy violations or attacks. This may be because using a database’s raw auditing capability for monitoring has been associated with performance degradation, and few viable alternative solutions have been available.

Regulatory Requirements for Monitoring

The requirement to monitor log files is highlighted within best practices such as Control Objectives for Information and related Technology (COBIT) and industry requirements such as the Payment Card Industry (PCI) Data Security Standard (DSS). Some of the major regulations/requirements—PCI DSS, the US Health Insurance Portability and Accountability Act (HIPAA), Title 21 Code of Federal Regulations (CFR) Part 11 of the US Food and Drug Administration guidelines, the US Gramm-Leach-Bliley Act, the North American Electric Reliability Corporation (NERC)’s Critical Infrastructure Protection (CIP) standards, the US Federal Information Security Management Act (FISMA) and the US Sarbanes-Oxley Act—and the pertinent log file requirements are compared in figure 2.

The US Securities and Exchange Commission approved the Public Company Accounting Oversight Board (PCAOB)’s Auditing Standard No. 5 on 25 July 2007. Auditors use it to assess management’s internal control over financial reporting in complying with the Sarbanes-Oxley Act.

As a result of these regulations, there is a growing focus on using enhanced continuous control monitoring (CCM) and continuous control auditing tools, which should reduce compliance costs and improve business efficiency.4 One way to automate application controls that are currently checked manually is to use the information in log files to implement CCM.

Monitoring as a Service

Most network devices support the Syslog protocol to redirect their log files to a central log server. Applications and databases generally do not support Syslog, although Oracle 10g release 2 now supports writing audit records to the operating system using a Syslog audit trail and other database vendors should follow suit.5 Databases generally store their audit information in a table within one of their system databases. An agent or script can be run to “watch” this table and convert any entries that are written to the audit table into Syslog, so it can be forwarded to a central logging server. Instead of using the native auditing capability of the database, which may impact performance, it is possible to use database application monitoring appliances to monitor the database for activity (this is described in more depth later in the article). The majority of these appliances support Syslog, enabling the information generated to be forwarded to a central log server.
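The agent or script described above can be sketched in a few lines. The following is a minimal illustration, not any vendor's implementation: the audit-table column names and the log server address are assumptions, and a real agent would also track which rows it has already forwarded. The PRI value follows the standard syslog encoding (facility × 8 + severity).

```python
import socket

# Hypothetical audit-table columns; real names vary by database vendor.
AUDIT_COLUMNS = ("event_time", "db_user", "action", "object_name")

def to_syslog(row, facility=13, severity=6):
    """Format one audit-table row as an RFC 3164-style syslog line.

    PRI = facility * 8 + severity (13 = log audit, 6 = informational).
    """
    pri = facility * 8 + severity
    msg = " ".join("%s=%s" % (c, row[c]) for c in AUDIT_COLUMNS)
    return "<%d>dbaudit: %s" % (pri, msg)

def forward(rows, host="logserver.example.com", port=514):
    """Send formatted audit rows to a central log server over UDP (port 514
    is the conventional syslog port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for row in rows:
        sock.sendto(to_syslog(row).encode("utf-8"), (host, port))
    sock.close()
```

In practice, such a script would poll the audit table on a schedule and forward only new entries.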

It is possible to outsource monitoring to a managed security provider, enabling experts to help set up a robust and flexible monitoring infrastructure. Managed security monitoring (MSM) providers aggregate, correlate, analyze and store the log data to give organizations overall visibility into their network security and work with customers to improve their incident response. At the same time, MSM providers can help satisfy auditors as they provide a level of objectivity and are experienced in producing reports that auditors require. Regulatory pressures—from legislation such as Sarbanes-Oxley and HIPAA to individual industry requirements—make log management and visibility into user access of systems and applications critical.

Database Monitoring Solutions

The solutions for monitoring databases are relatively new, yet they form an important component in the drive toward automated CCM. The ability to be alerted to a violation of policy at the application layer is extremely important: it may be perfectly normal for a user to look at one credit card record, but an alert should be generated when someone accesses or downloads 90 million credit card records and associated account information. The requirement to protect data in this fashion can be expressed through a policy; for instance, there should be no access to the data unless the user accesses the system through an approved application and, hence, is constrained by the segregation of duties and controls within the application. This is an extremely important rule that requires monitoring, reporting and, if necessary, alert generation.
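A policy of this kind can be reduced to a simple check over audited session data. The following sketch is illustrative only: the field names, the list of approved applications and the row threshold are assumptions, not part of any product.

```python
# Hypothetical policy check: flag any audited session that bypasses the
# approved applications or reads far more records than normal use would.
APPROVED_APPS = {"billing_ui", "crm_portal"}   # assumed application names
MAX_ROWS_PER_SESSION = 1000                    # assumed threshold

def violations(session):
    """Return a list of policy violations for one audited session.

    session: dict with 'application' (client program name) and
    'rows_returned' (total rows read in the session).
    """
    found = []
    if session["application"] not in APPROVED_APPS:
        found.append("access outside approved application")
    if session["rows_returned"] > MAX_ROWS_PER_SESSION:
        found.append("bulk read: %d rows" % session["rows_returned"])
    return found
```

A direct ad hoc connection pulling 90 million rows would trigger both rules; normal use through the approved application triggers neither.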

The need for database protection is heightened by home-grown applications. Good development practices reduce the number of coding errors, but they do not eliminate them; limited resources, human error or insufficient testing result in applications being deployed with serious vulnerabilities. The burden of security cannot lie simply with the programmers; there is a vital need to move from network monitoring into the application layer. Industry analysts predict the growth of database application monitoring (DAM) and DAM appliances, and expect them to become as commonplace as intrusion detection system technology is today.

Most large organizations struggle to know what data they have and where the data are located, making it virtually impossible to protect the information. Database monitoring projects need to start by determining what data are out there and what applications are touching the database; the policy can then be built from there. Knowing where data reside is not trivial, so some database monitoring vendors have built a discovery capability into their appliances, which looks for any SQL traffic and highlights which databases exist and which ones are storing sensitive data.
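The discovery step can be approximated even without an appliance by sampling column values and flagging those that look like sensitive data. The sketch below is a simplified assumption of how such discovery might work, using payment-card-shaped strings as the example; real products combine many such detectors.

```python
import re

# Discovery sketch: flag columns whose sampled values look like payment
# card numbers (13-16 digits, optionally separated by spaces or hyphens).
CARD_RE = re.compile(r"^\d[\d -]{11,17}\d$")

def looks_like_card(value):
    digits = re.sub(r"[ -]", "", value)
    return (bool(CARD_RE.match(value))
            and digits.isdigit()
            and 13 <= len(digits) <= 16)

def sensitive_columns(sample):
    """sample: {column_name: [sampled values]}. Flag columns where most
    sampled values match the card pattern."""
    flagged = []
    for col, values in sample.items():
        hits = sum(1 for v in values if looks_like_card(v))
        if values and hits / len(values) > 0.8:
            flagged.append(col)
    return flagged
```

Requiring most of a column's sample to match, rather than a single value, keeps incidental numeric fields from being flagged.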

“The database itself is not intelligent enough to see suspicious activity over the wire or if an authorized user is executing a command a million times, that is why you have to have these tools,” said Noel Yuhanna, principal analyst at Forrester Research.6

Traditionally, enterprises have built tools and operational procedures to monitor database activity, usually leveraging some form of the auditing capabilities packaged with the database itself and writing tools around them. Over time, this approach has become insufficient, as it is not policy-driven, selective, scalable or manageable across the heterogeneous data environments that exist today.

An emerging group of hardware and software vendors has begun to address this problem with out-of-the-box solutions that take a variety of approaches to monitoring database activity. There are three fundamental approaches:

  • A software-only approach—Database activity monitoring with a software agent requires installing and configuring software on each database host. This approach typically requires turning on some level of native database auditing from which the software agent gathers data. The challenge with this approach is that environments with many database transactions may object to enabling native database auditing, as it may impact overall database performance. In addition, many software-only solutions lack a centralized management, reporting and configuration console.
  • A network appliance—A relatively new approach to database monitoring is to use a network appliance to monitor database traffic. These appliances either run as passive devices connected to a mirroring or Switched Port Analyzer (SPAN) port on a switch, or act as in-line devices, i.e., essentially database firewalls. The primary difference is that an appliance acting as an in-line device is a point of failure in the data environment: any in-line device that goes down could result in lost business or database transactions and general application downtime. Most vendors support either in-line or passive modes. In-line mode may be regarded as more secure, since there is no way a hacker could bypass the appliance to gain access to the database; passive devices tend to be simpler to deploy, as they do not impact the data environment in any way.
  • A combination of the above—A combination of network appliance and local software auditing is an ideal way to address data activity monitoring in an enterprise, as it maximizes the overall coverage of the auditing solution. Network database traffic can be captured by the network appliance, and local host database traffic (e.g., a database administrator accessing the database directly) can be audited through the local software agent or native database auditing. The data are collected and analyzed centrally, ensuring centralized policy management, reporting and configuration. The majority of the network-based devices function simply by watching the SQL traffic and interpreting the activity. Though SQL is a standard, each vendor’s implementation differs slightly, so the database monitoring devices are platform-specific. It is best to select a device that supports all of the database platforms in use in-house, so the organization does not end up with multiple monitoring platforms and the problems inherent in having diverse solutions.

A purely network-based approach, however, does not monitor local access, as only network traffic is being observed.

Database Monitoring Limitations

There are four main problems with database monitoring:

  • Stored procedures and triggers
  • Encrypted network traffic
  • Connection-pooled environments
  • Limited support for MSM or security incident and event management (SIEM) systems

Stored procedures are a key part of an overall database management system (DBMS) architecture. Auditing and monitoring stored procedures can be a challenge for some vendors, as stored procedures generate SQL at runtime. A stored procedure is a database object executed inside the container of the database, which means the SQL is visible only inside the database, not over the network. Monitoring the SQL contained within a stored procedure therefore requires local monitoring: network-based monitoring can see that the procedure was called and the results of running it, but not the commands used within it. If correct change control is in place, with tight controls on changes to applications in the production environment and reports generated on the creation of new stored procedures, the fact that the individual SQL statements within a stored procedure are not audited may not pose a risk. Triggers pose the same challenge, as they are native to the database. Native auditing is the only way to know the details of what is happening inside stored procedures and triggers.
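The compensating report on new stored procedures amounts to comparing catalog snapshots over time. The following sketch assumes snapshots have already been taken (e.g., from a query against the database's routine catalog) as mappings of procedure name to a hash of its definition; the structure is illustrative, not a vendor API.

```python
# Change-control sketch: compare two snapshots of the database's stored
# procedures and report anything added or modified since the baseline.
# Each snapshot maps procedure name -> hash of its source text.

def new_or_changed(baseline, current):
    """Return the procedures created or altered between snapshots."""
    added = sorted(set(current) - set(baseline))
    changed = sorted(name for name in current
                     if name in baseline and current[name] != baseline[name])
    return {"added": added, "changed": changed}
```

Run against a baseline taken at release time, any unexpected entry in either list is a change-control exception worth investigating.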

Another database auditing challenge arises when database network traffic is encrypted: network-based monitoring cannot read the encrypted SQL on the wire.

Database monitoring appliances can address these challenges only by using an agent or agentless local auditing. The vendor may provide an agent that reads the raw audit data from the database itself and then forwards the information to the appliance. The agent most likely sets up finely tuned audit policies to monitor local access by privileged users and other events that the organization needs to audit. An agentless solution still requires local auditing to be turned on but, instead of software running on the host to forward the events to the database monitoring solution, the appliance logs in to retrieve the audit information. This highlights the need for a three-way auditing approach—one that leverages the selective native audit capabilities of the database, a network appliance and, where necessary, a software agent.

Another problem with database monitoring lies with connection-pooled environments, where authentication information is not passed along to the database. The application uses a configured account on the web server to communicate with the database server, so the database log or the network traffic shows the name of that configured account rather than the actual user ID. Well-written web applications set some kind of client ID, sending the end user’s login details along with the other connection details; this enables the actual user ID to be shown in the log files and on network monitoring devices. Home-grown applications, or applications that have not used a facility to set the client ID, do not record the actual user’s login details. Some vendors have solved this problem by creating agents installed on the web servers that communicate with the monitoring devices to ensure that the user’s identity is recorded. MSM service providers can also correlate web log files with the database log files; information such as the SPID (the SQL server’s internal process ID) enables the username to be identified.
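The SPID-based correlation mentioned above is essentially a join between the two log sources. The sketch below assumes both logs have already been parsed into dicts with the field names shown; those names are illustrative, not a standard log format.

```python
# Correlation sketch: the web-application log knows the real end user and
# the SPID of the pooled database connection it used; the database log
# knows the SPID and the SQL. Joining on SPID attributes each query.

def attribute_queries(web_log, db_log):
    """Return db_log entries annotated with the real end user."""
    user_by_spid = {e["spid"]: e["user"] for e in web_log}
    return [dict(e, user=user_by_spid.get(e["spid"], "<unknown>"))
            for e in db_log]
```

In a real deployment the join would also have to bound SPID reuse in time, since pooled connections are handed to different users over their lifetime.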

Most DAM solutions have not been developed with support for existing monitoring frameworks in mind. The alerts and messages that come from an appliance are not coded in any standard way, nor intelligible to anyone other than a database administrator. Without clear labeling of the type of alert detected, an SIEM or an MSM service cannot take the message and correlate it with other log messages to identify the alert as a symptom of a wider attack. A DAM tool that is unable to integrate into a wider monitoring framework results in a fragmented approach to monitoring, creating islands of SIEM tools.

Common actions that a monitoring appliance can be configured to take include blocking transactions that violate policy using TCP reset, automatic logouts of users or shutting down the virtual private network.

Attack Recognition

Attack recognition generally is done by pattern matching, anomaly detection or rule creation.

Pattern matching refers to the ability to match known patterns of data. Examples where this is most commonly used include (US) Social Security and credit card numbers. Some database appliances have credit card validation code built in to reduce false positives.
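The credit card validation mentioned here is typically the Luhn check-digit algorithm: a string of digits that merely looks like a card number but fails the checksum is almost certainly a false positive. A minimal implementation:

```python
def luhn_valid(number):
    """Luhn check: double every second digit from the right, subtract 9
    from any doubled digit over 9, and require the sum to end in 0."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:           # too short to be a card number
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:             # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

Applied after a regular-expression match, this one extra check filters out most random digit strings, since only one in ten passes the checksum by chance.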

Anomaly detection, or behavioral fingerprinting, refers to flagging behavior that the monitoring software defines as “not normal.” Most database appliances are put into an observation mode for a length of time so that they can baseline activity. Because database activity tends to be regular, this kind of alerting tends to be reasonably accurate. Some platforms have a feature called “intelligent learning,” in which the appliance continues to learn new sets of behavior, normally over a 30-day rolling period.
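At its simplest, the baselining described above can be modeled statistically: learn the mean and spread of some activity measure during observation mode, then flag values that deviate strongly. This sketch is a deliberate simplification of what commercial appliances do; the three-sigma threshold is an assumption.

```python
import statistics

def baseline(counts):
    """Learn a simple behavioral baseline from per-interval activity
    counts gathered during the observation period."""
    return statistics.mean(counts), statistics.pstdev(counts)

def is_anomalous(count, mean, stdev, n_sigmas=3):
    """Flag activity that deviates from the baseline by more than
    n_sigmas standard deviations (floor of 1 to avoid a zero spread)."""
    return abs(count - mean) > n_sigmas * (stdev or 1)
```

A rolling 30-day learning period corresponds to recomputing the baseline over a sliding window of recent intervals, so gradual changes in behavior are absorbed rather than alerted on.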

Rule creation allows the organization to define rules, according to its policy, for what information should be monitored. For example, it would be possible to report on additions or deletions to the master vendor table to ensure that enterprising employees have not added themselves as suppliers.
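The vendor-table example can be expressed as a concrete rule: flag any vendor record sharing a bank account with an employee record. The field names below are assumptions for illustration; a real rule would run against the actual vendor master and HR tables.

```python
# Rule sketch for the vendor-master example: an employee who adds
# themselves as a supplier typically reuses their own bank account,
# so cross-check vendor accounts against employee accounts.

def suspicious_vendors(vendors, employees):
    """vendors/employees: lists of dicts with 'name' and 'bank_account'.
    Return vendor names whose bank account matches an employee's."""
    employee_accounts = {e["bank_account"] for e in employees}
    return [v["name"] for v in vendors
            if v["bank_account"] in employee_accounts]
```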

Signature-based detection refers to SQL commands that are associated with common attacks. MasterCard lists SQL injection attacks as the primary source of credit card security breaches.7 In an SQL injection attack, the attacker commonly uses UNION-based queries, and the monitoring tool detects the SQL statements associated with this kind of attack.
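Two of the most common injection signatures, a grafted UNION SELECT and a tautological OR clause, can be sketched as simple pattern checks. These two patterns are illustrative assumptions; commercial tools ship much larger, continuously updated signature sets.

```python
import re

# Signature sketch: flag SQL containing patterns commonly seen in
# injection attacks.
INJECTION_SIGNATURES = [
    # UNION SELECT grafted onto a query, allowing gaps between keywords
    re.compile(r"\bunion\b.{0,40}\bselect\b", re.I | re.S),
    # tautology such as OR 1=1 or OR '1'='1'
    re.compile(r"\bor\b\s+'?\d+'?\s*=\s*'?\d+'?", re.I),
]

def looks_injected(sql):
    return any(sig.search(sql) for sig in INJECTION_SIGNATURES)
```

Signature checks of this kind are cheap but prone to evasion (comments, encoding tricks), which is why they are combined with the anomaly and rule-based detection described above.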

The way data are reported from the monitoring platform is extremely important, and it is crucial to build a monitoring framework. Network devices should send their log files to a central log collector, and the information should be used within a security incident and event management environment. Security breaches generally involve more than one system, and centralizing log files enables the managed security monitoring service to correlate the information and look for attacks across multiple platforms. For example, if an SQL injection attack starts at the web server and then reaches the database, monitoring only one layer or the other does not give the full picture.
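Cross-layer correlation often reduces to pairing a database-layer alert with web requests seen shortly before it. The sketch below assumes both logs carry ISO 8601 timestamps and that a five-second window is reasonable; both are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Correlation sketch: for a database-layer alert, find the web-server
# requests that arrived within a short window before it, so an injection
# attempt can be traced across both layers.

def correlate(db_alert, web_requests, window_seconds=5):
    """Return web requests within window_seconds before the alert."""
    t = datetime.fromisoformat(db_alert["time"])
    lo = t - timedelta(seconds=window_seconds)
    return [r for r in web_requests
            if lo <= datetime.fromisoformat(r["time"]) <= t]
```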

Having a separate monitoring solution for every platform would increase costs and complexity, so the database monitoring solution that an organization selects should be able to plug into the existing monitoring framework. The monitoring framework should be scalable and support multiple platforms, including applications. Monitoring is a 24x7, 365-day-a-year service.

The majority of business-critical data is stored in databases. Applications are written by human beings and, no matter how good an organization’s development practices are, they are subject to human error. It is insufficient to rely on change control and application testing to ensure that applications are not vulnerable to attack, and good development practices do not necessarily protect organizations from risks that result from a violation of policy or an internal breach. Security must be approached as a layered defense, and the final layer must always be monitoring. It is imperative for organizations that need to protect personal or confidential data to build a monitoring framework that extends beyond the network layer into the application layer, and DAM is a key component within that framework.


1 Ponemon Institute, Survey on the Business Impact of Data Breach, commissioned by Scott & Scott LLP, and

2 Privacy Rights Clearinghouse, chronology of data breaches,

3 Gaudin, Sharon; “Despite Deluge of Data Losses, 40% Don’t Monitor Databases,” InformationWeek, 5 June 2007, /showArticle.jhtml?articleID=199900995&cid=RSSfeed_IWK_News

4 Committee of Sponsoring Organizations of the Treadway Commission, Internal Control—Integrated Framework, Guidance on Monitoring Internal Control Systems, September 2007,

5 Oracle support for Syslog can be found at network.102/b14266/whatsnew.htm#i970212.

6 Roiter, Neil; “Compliance, Data Breaches Heighten Database Security Needs,” Information Security Magazine, 16 August 2007

7 MasterCard Worldwide, Site Data Protection, Program Update, 9 October 2006,

Author’s Note:

The author would like to thank Scott B. Smith and Jenna Sindle for editorial and technical assistance.

Sushila Nair, CISA, CISSP, BS 7799 LA
is a product manager at BT Counterpane, responsible for compliance products. Nair has 20 years of experience in computing infrastructure and business security and a diverse background including work in the telecommunications sector, risk analysis and credit card fraud. She has worked with the insurance industry in Europe and the US on methods of quantifying risk for e-insurance based on ISO 27001. She was instrumental in creating the first banking group in Malaysia focused on using secondary authentication devices for banking transactions. Nair has worked extensively with customers of BT to develop monitoring solutions that meet the needs of regulatory compliance, including that of the Payment Card Industry Data Security Standard.

Information Systems Control Journal, formerly the IS Audit & Control Journal, is published by the ISACA. Membership in the association, a voluntary organization of persons interested in information systems (IS) auditing, control and security, entitles one to receive an annual subscription to the Information Systems Control Journal.

Opinions expressed in the Information Systems Control Journal represent the views of the authors and advertisers. They may differ from policies and official statements of the Information Systems Audit and Control Association and/or the IT Governance Institute® and their committees, and from opinions endorsed by authors' employers, or the editors of this Journal. Information Systems Control Journal does not attest to the originality of authors' content.

Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For other copying, reprint or republication, permission must be obtained in writing from the association. Where necessary, permission is granted by the copyright owners for those registered with the Copyright Clearance Center (CCC), 27 Congress St., Salem, Mass. 01970, to photocopy articles owned by the Information Systems Audit and Control Association Inc., for a flat fee of US $2.50 per article plus 25¢ per page. Send payment to the CCC stating the ISSN (1526-7407), date, volume, and first and last page number of each article. Copying for other than personal use or internal reference, or of articles or columns not owned by the association without express permission of the association or the copyright owner is expressly prohibited.