ISACA Journal
Volume 5, 2014

Columns 

IS Audit Basics: What Every IT Auditor Should Know About Computer-generated Reports 

Tommie Singleton, CISA, CGEIT, CPA 

So many times, auditors of all types use a computer-generated report to perform some aspect of assurance. For example, financial auditors may pull a computer-generated list of accounts receivable (i.e., subsidiary listing) and use it to confirm receivables. IT auditors sometimes do the same thing with lists of access, logs or other reports relevant to IT audits. A popular use today is to generate data sets (a similar resulting object) to conduct data mining or data analytics.

It is tempting to look at a neat report that came from a computer and to have a “leap of faith” as to the veracity and reliability of the information of that report. Standard setters have realized the fallacy of that thinking and have issued guidance to auditors regarding computer-generated reports. The Public Company Accounting Oversight Board (PCAOB) inspection reports show that one major area of deficiency in financial audits of issuers is not gaining assurance regarding the accuracy and completeness of the report’s information.

The Goal

Standard setters have stated the goal: the completeness and accuracy of the information in the report upon which the auditor is relying. Accuracy alone is insufficient. One needs to obtain assurance about both the completeness and the accuracy.

The US Government Accountability Office (GAO) uses the term data reliability to refer to the accuracy and completeness of data. The GAO rates data as “sufficiently reliable,” “not sufficiently reliable” or “of undetermined reliability.” A determination of data reliability should lead to an assessment of the accuracy and completeness of a computer-generated report built from those data, although it may be necessary to couple that with a separate test of the report settings. Data can be reliable for one purpose but not for another because the purposes depend on different data fields.

Why?

Consider the example of a financial audit. The financial auditor might use a key report from the information system (i.e., computer) as key evidence in an important audit procedure. In that case, reliance on the information is critical to the conclusions about the assertion of the account balance, class of transactions or disclosure being tested.

Consider an IT audit. The same result is true. If the IT auditor is using a computer-generated list of credit card charges (or similar financial data), or a list of users and accesses, the conclusion after testing is highly dependent on the accuracy and completeness of the information being used.

Therefore, the IT auditor will want to first examine the computer-generated report and determine, with specificity, why it is appropriate to rely on it and why its completeness and accuracy can be trusted.

Procedures

There are generally two ways to gain assurance for completeness and accuracy. One is to compare the report to information or data external to the system and the other is to compare the report to the internal database.

The best way to get assurance from a computer-generated report is to compare it for completeness and accuracy against data/information independent of the computer system. For instance, if the entity produces something and has a standard rate or formula for billing, there is operational data to support the amount reflected in those billings. That information could be used to test the completeness and accuracy of a listing of billings with some simple calculations. External information may exist in other repositories as well. Other tests against external information include the following:

  1. Trace a suitable sample of transactions in the system to the source documents for accuracy:
    • Trace a suitable sample of source documents to the data for completeness.
  2. Look for existing internal tests, reconciliations or reviews of data and/or reports that could support accuracy and/or completeness of the data and/or report:
    • Reconciliations are valid only if variances are researched, explained and resolved in a timely manner.
    • A review of reconciliations can substantiate accuracy of the reconciliation.
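The two-way trace described above can be sketched in Python. Everything here is illustrative: the invoice IDs, the record layouts and the standard rate are hypothetical, not from the article; the point is only that report-to-source tracing supports accuracy while source-to-report tracing supports completeness.

```python
# Hypothetical trace of a computer-generated billing report against
# independent operational data (units shipped x standard billing rate).
# All field names and figures are illustrative assumptions.

STANDARD_RATE = 25.00  # assumed contractual rate per unit

# Operational (source) data, independent of the billing system
source_docs = {
    "INV-001": {"units": 10},
    "INV-002": {"units": 4},
    "INV-003": {"units": 7},
}

# Computer-generated billing report under audit
report = {
    "INV-001": 250.00,
    "INV-002": 100.00,
    # INV-003 is absent from the report -> a completeness exception
}

# Accuracy: trace report items back to source and recompute each billing
accuracy_exceptions = [
    inv for inv, amount in report.items()
    if inv not in source_docs
    or abs(source_docs[inv]["units"] * STANDARD_RATE - amount) > 0.01
]

# Completeness: trace source documents forward into the report
completeness_exceptions = [inv for inv in source_docs if inv not in report]

print("accuracy exceptions:", accuracy_exceptions)          # []
print("completeness exceptions:", completeness_exceptions)  # ['INV-003']
```

Note that the accuracy pass alone would not flag the missing invoice; only the second, source-to-report pass surfaces it, which is why one direction of tracing cannot substitute for the other.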

When external information is not readily available, the comparison would need to be the report against the database in the system. The following are examples of how that can be accomplished:

  1. Take a suitable sample of transactions from the report and trace them to the internal transactions for accuracy. (Completeness would need to be a different test.)
  2. Test application control(s) over the transactions for completeness and/or accuracy depending on the nature of the control(s).
  3. Test internal controls whose purpose is to ensure the reliability of the target data file.
  4. Examine the report settings, especially queries and custom reports, for correctness as to completeness. Accuracy may need to be assessed via a different test:
    • Verify that the data file being used is the appropriate one.
    • Test filters being used for types of transactions, dates, etc.
  5. Test the change management process for the report, if applicable.
  6. Use data analytics to determine the reliability of the underlying data:
    • Test key fields to identify issues that would materially affect accuracy and/or completeness. For example, verify that all sales records contain valid types of services.
    • Establish some criteria for expected results and compare to actual results for accuracy.
    • Consider separate verification of proper settings for the report itself.
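The data analytics step above (item 6) can be sketched as follows. The field names, the set of valid service types, the sample records and the reported total are all assumptions for illustration, under the sales-records example given in the text:

```python
# Hypothetical data-reliability checks on underlying sales data.
# Field names, valid values and criteria are illustrative assumptions.

VALID_SERVICE_TYPES = {"consulting", "support", "training"}

sales_records = [
    {"id": 1, "service_type": "consulting", "amount": 1200.00},
    {"id": 2, "service_type": "support",    "amount": 300.00},
    {"id": 3, "service_type": "unknown",    "amount": 450.00},  # invalid type
    {"id": 4, "service_type": "training",   "amount": None},    # missing amount
]

# Key-field test: every record should carry a valid service type and an amount
field_exceptions = [
    r["id"] for r in sales_records
    if r["service_type"] not in VALID_SERVICE_TYPES or r["amount"] is None
]

# Expected-vs-actual test: compare the report's stated total to a recomputed one
reported_total = 1950.00  # assumed total shown on the computer-generated report
recomputed_total = sum(
    r["amount"] for r in sales_records if r["amount"] is not None
)

print("key-field exceptions:", field_exceptions)  # [3, 4]
print("totals agree:", abs(reported_total - recomputed_total) < 0.01)
```

Exceptions from such tests do not by themselves make the report unusable; they feed the GAO-style judgment of whether the data are sufficiently reliable for the specific audit purpose.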

Sometimes a test provides assurance for completeness but not accuracy, and vice versa. For instance, in confirming receivables, the auditor may lack assurance over the accuracy and completeness of the list of subsidiary accounts and decide to confirm a high percentage of accounts (for example, 80 percent) as a compensating test. However, that test confirms only accuracy, not completeness.

Also, a paperless transaction or system will not have source documents from which to test the data or the report. In that case, internal controls are critical to obtaining assurance about accuracy and completeness.

Finally, sometimes one cannot attain sufficient assurance about the accuracy and completeness of the data and report, as indicated by the ratings the GAO uses for data reliability. When that happens, what do auditors do with the report?

They select an alternative approach. For instance, there are two ways to confirm receivables. The first, confirmation letters, relies on the list of subsidiary accounts. If assurance over the accuracy and completeness of that list cannot be attained, the alternative is to confirm via subsequent payments.

Conclusion

With the continued growth in computer-generated reports, and the growing attention by reviewers and standard setters to the accuracy and completeness of reports used in audits, there is a need to understand the issue and to develop a framework for obtaining that assurance. The key is, first, to use a valid source for testing and, second, to obtain assurance for both completeness and accuracy.

Tommie Singleton, CISA, CGEIT, CPA, is the director of consulting for Carr Riggs & Ingram, a large regional public accounting firm. His duties involve forensic accounting, business valuation, IT assurance and service organization control engagements. Singleton is responsible for recruiting, training, research, support and quality control for those services and the staff that perform them. He is also a former academic, having taught at several universities from 1991 to 2012. Singleton has published numerous articles, coauthored books and made many presentations on IT auditing and fraud.

 


Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and from opinions endorsed by authors’ employers or the editors of the Journal. The ISACA Journal does not attest to the originality of authors’ content.