JOnline: The Information Security Assessment and Evaluation Methodologies: A DoD Framework for Control Self-assessment 

 

The control self-assessment (CSA) methodology employed by an organization—in the experience of this author—is rarely the same methodology as that employed by external or third-party assessors. The US Department of Defense (DoD) is no exception.

Assessment of the information security posture of National Security Systems (NSS) within the DoD is normally performed as part of the certification and accreditation (C&A) process, during which a baseline set of security controls—defined by the criticality and confidentiality requirements of the information processed—is applied and certified.1 Certification is used by the designated approval authority to determine whether the residual risk to national security presented by system operation is acceptable or if additional controls should be applied. After formal authority to operate a system is given, responsibility for managing residual risk is transferred from the certification authority to the operational commander2 and is generally performed through some form of self-assessment.

While there is no formal methodology prescribed by the DoD for this type of CSA, the US National Security Agency (NSA) has developed third-party information security assessment methodologies specifically for NSS.3 When performed together, the INFOSEC Assessment Methodology (IAM) and INFOSEC Evaluation Methodology (IEM) provide viable, repeatable assessments of the administrative, operational and technical controls used to mitigate operational risk.4

Risk Management

The US Federal Information Security Management Act (FISMA) of 2002 changed the way all US federal departments and agencies manage information technology (IT) risk by specifically mandating the use and assessment of security controls in all stages of the system acquisition life cycle.5 Within the DoD, a minimum baseline of security controls is prescribed for each IT system based on mission assurance and confidentiality requirements and assessed as part of the C&A process.6 However, current policy neither provides nor recommends a methodology for third-party control assessment or self-assessment. While FISMA gives the US National Institute of Standards and Technology (NIST) the responsibility for “developing standards, guidelines and associated methods and techniques…for providing adequate information security for all agency operations and assets,” the Act specifically excludes NSS.7 Guidance for IT risk assessment of these systems remains under the purview of the NSA.8, 9

Controls Assessment

Control assessment is handled differently before and after the accreditation decision is made and the system begins operation.10 Preaccreditation control assessment is normally performed by a third-party expert on behalf of the system acquirer (not necessarily the system owner). Postaccreditation assessments, however, are specifically geared to address maintenance of the system security profile; are typically the responsibility of the system owner; and are performed by third-party experts, internal reviewers or a combination of the two.11, 12 “Expert assessment” refers to assessment methods characterized by a high level of technical skill and professional tools used to determine the security profile of the IT system. In contrast, self-assessments are characterized by a generally less technical approach but a more complete knowledge of the target system(s).13

Classification Scheme

This distinction is further detailed by the following classification scheme for various assessment methods based on “abstraction level” and “approach”:14

Abstraction refers to a method’s level of detail and is divided into three categories: organizational, expert and collaborative.

  • Organizational defines a level of detail traditionally suitable for CSA.
  • Expert-level assessments are defined as above.
  • Collaborative is a mixture of organizational- and expert-level assessments.

The approach taken by a given method is similarly defined by three categories.

  1. Temporal—A one-time snapshot of the system’s security profile based on real-time tests or attacks
  2. Comparative—An evaluation of a system’s security profile against an explicit standard
  3. Functional—A blend of the temporal and comparative approaches

One of the major attributes of functional methods is the focus on specific risk, i.e., “specific threats, vulnerabilities, assets and countermeasures.”15 Figure 1 presents the classification of nine of the methods evaluated, which serves to illustrate the relative placement of the IAM as an expert-level, functional assessment methodology.

Figure 1 - Classification Matrix

The nine methods presented in figure 1 are described by approach from the left and abstraction level from the top of the table (a short code sketch following the list recaps the placements that are stated explicitly):

  • Red team refers to an adversarial engagement approach in which a highly trained individual or group attempts to discover and exploit vulnerabilities using the same tools and techniques employed by an adversary.16 Although red teaming is considered the third part of the information security assessment triad, NSA has not published a red team methodology as part of the INFOSEC Assurance Training and Rating Program (IATRP).17
  • Penetration (pen) testing is an example of an exercise method that uses the same tools and techniques as red team assessments but in cooperation with the system owner.18
  • Script methods refer to the use of automated assessment tools that do not require a high level of expertise, i.e., they can be used by organizational personnel without expert assistance.19
  • NSA produced the IAM and IEM as part of the information security assessment triad to provide a comprehensive methodology for the complete assessment of a system’s organizational and technical security posture.20 Sequence methods such as the IAM are characterized by their risk-driven approach and are considered “the epitome of abstract methods.”21 While the IEM was not evaluated as part of the study, it is reasonable to categorize it at the expert level, given its similarity to the IAM, and as a temporal method, given its expert use of vulnerability assessment tools. Its similarity to and reliance on the IAM indicate that it is also risk-specific.
  • Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE) is an asset-driven, risk-based assessment method whereby the organization being evaluated provides direction to the assessment team conducting the evaluation.22
  • Developed on behalf of the British government, the Central Computer and Telecommunications Agency (CCTA) Risk Analysis and Management Method (CRAMM) is a matrix method that assesses risk based on responses to a questionnaire and provides an extensive list of countermeasures.23
  • The Information Systems Security Association (ISSA) publishes a set of Generally Accepted Information Security Principles (GAISP) to “…offer a translation of existing regulations, standards and generally accepted practices into logical strategy and detailed tactics that can be addressed by any organization.”24
  • In response to FISMA, NIST is modifying and expanding the 800 series of special publications (SP) to address a wide range of information security issues—focused on security risk—in the acquisition, design, development and operation of IT systems.25 In particular, draft SP 800-53A describes a best practice method for control-based assessment.26, 27
  • International Organization for Standardization (ISO) 17799, based on British Standard (BS) 7799, specifies standards for 10 defined areas within information security.28 ISO 17799 is a prime example of an audit method that compares an organization’s information security posture against a specific standard.29
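
To make the two-dimensional scheme concrete, the following minimal Python sketch encodes the placements that are stated explicitly in the text or in figure 1. It is illustrative only—not part of the Sandia scheme or the NSA methodologies—and any dimension the text does not state is marked as unknown rather than guessed.

```python
# Sketch of figure 1's classification, assuming a simple
# (abstraction level, approach) pair per method. None marks a
# dimension the article does not state explicitly.
CLASSIFICATION = {
    "Red team":         ("expert", "temporal"),          # real attacks by highly trained assessors
    "Penetration test": ("expert", "temporal"),          # same tools, but cooperative
    "Script":           ("organizational", None),        # automated tools, no expert assistance needed
    "IAM":              ("expert", "functional"),        # per figure 1
    "IEM":              ("expert", "temporal"),          # the author's categorization
    "ISO 17799":        (None, "comparative"),           # audit against an explicit standard
}

def methods_by_approach(approach):
    """Return the names of all methods classified under the given approach."""
    return [name for name, (_, a) in CLASSIFICATION.items() if a == approach]

print(methods_by_approach("temporal"))  # ['Red team', 'Penetration test', 'IEM']
```

OCTAVE, CRAMM, GAISP and the NIST special publications also appear in figure 1, but their cell placements are not spelled out in the text and are therefore omitted from the sketch.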

The NSA Methodologies

IAM and IEM were created by NSA as part of the IATRP, which provides training on the methodologies and a capability maturity model (CMM)—based on the Systems Security Engineering CMM (SSE-CMM) developed by the International Systems Security Engineering Association (ISSEA)—for the organizations employing the methodologies. Use of IAM is “gaining momentum” within the DoD and is taught at the National Defense University as part of the chief information officer (CIO) certification program. NIST is also incorporating elements of IAM and IEM into its own special publication on C&A.30, 31

IAM and IEM provide repeatable processes for assessing the criticality of an organization’s information and the security controls implemented to protect that information. In fact, IAM provides the necessary foundation upon which IEM can be used to assess the technical controls implemented by an organization. Although the two methodologies are designed to be conducted separately and in sequence, this is not always feasible, and they are often conducted simultaneously. Combining IAM and IEM can be described in three phases: preanalysis, analysis and postanalysis.32

As related to CSA, preanalysis activities concentrate on assessing/confirming the criticality of the organization’s information and the systems upon which it resides, scoping the extent of the analysis, and obtaining as much information about the organization’s security posture as is available. Analysis is centered on assessment of the organization’s nontechnical controls via the collection of information through questionnaires, surveys, interviews and similar instruments, as well as documentation review, observation and demonstration.33 Analysis also includes an extensive evaluation of the technical controls applied by the organization.34 Postanalysis is concerned with final analysis and with reporting the results and recommendations for improvement, if appropriate.35, 36
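
To illustrate how the three phases break down, the following sketch arranges the activities named above into a simple checklist. The structure is purely illustrative, assuming one wanted to track a combined IAM/IEM engagement programmatically; it is not part of either methodology, and the activity names are paraphrased from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    """One phase of a combined IAM/IEM engagement, tracked as a checklist."""
    name: str
    activities: list[str] = field(default_factory=list)
    complete: bool = False

# Activities paraphrased from the text above; names are illustrative.
COMBINED_IAM_IEM = [
    Phase("preanalysis", [
        "assess/confirm criticality of information and systems",
        "scope the extent of the analysis",
        "gather available security-posture information",
    ]),
    Phase("analysis", [
        "assess nontechnical controls (questionnaires, surveys, interviews, observation)",
        "evaluate technical controls",
    ]),
    Phase("postanalysis", [
        "perform final analysis",
        "report results and recommendations for improvement",
    ]),
]

for phase in COMBINED_IAM_IEM:
    print(f"{phase.name}: {'; '.join(phase.activities)}")
```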

One of the distinguishing characteristics of these methodologies is that NSA does not allow certain activities used for the assessment or evaluation to be tailored. IEM, for example, expressly requires that the technical evaluation address 10 specific baseline activities: port scanning, Simple Network Management Protocol (SNMP) scanning, enumeration and banner grabbing, wireless enumeration, vulnerability scanning, host evaluation (as opposed to network-level evaluation), network device analysis, password compliance testing, application-specific scanning, and network sniffing.37 In both methodologies, the actual methods and tools used are not specified. Thus, an assessor can adapt to the organizational environment and employ tools and techniques dictated by the technologies in use and industry best practices.
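
Because NSA specifies the baseline activities but not the tools, even a simple script can address the spirit of an activity in a small environment. As a hypothetical illustration of the first baseline activity (port scanning), the following minimal Python sketch performs a naive TCP connect scan; it is not an IEM-sanctioned tool, and in practice an assessor would use purpose-built scanners suited to the target environment.

```python
import socket

def tcp_connect_scan(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
    """Naive TCP connect scan: return the subset of ports that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            try:
                sock.connect((host, port))
            except OSError:  # connection refused, timed out or host unreachable
                continue
            open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scan only hosts you are authorized to assess; unauthorized
    # scanning may be illegal.
    print(tcp_connect_scan("127.0.0.1", [22, 80, 443, 3389]))
```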

IAM/IEM-based Self-assessment

IAM is categorized as an expert-level assessment method that—by definition—does not lend itself to self-assessment.38 In fact, NSA designed IAM and IEM specifically for use by trained and certified third-party assessors.39, 40 However, NSA also recommends that government personnel obtain IAM training to better understand how they will be assessed using the methodology.41 In one reported case, training organizational personnel in a standards-based risk assessment methodology produced a 35 percent decrease in the resources expended for audit when the method was used for CSA in support of the audit function.42 Thus, while “collaborative” methods exist, it is certainly possible for organizational personnel to gain the same level of expertise in IAM and IEM as third-party assessors, i.e., to employ these expert-level assessment methods for CSA. In addition, NSA already allows all but specific portions of these methodologies to be tailored to meet an organization’s requirements when used for third-party assessment.43, 44 It follows that IAM and IEM can be modified similarly to support CSA. Thus, given that self-assessors can be adequately trained, the remaining difference between third-party assessment and self-assessment is the relative familiarity the assessor has with both the assessment methods and the target system(s), a familiarity that favors the self-assessment approach.45

Conclusion

While NIST provides a CSA framework, this framework has not been approved by NSA for use on NSS, and NSA does not provide an alternative. However, NSA developed IAM and IEM for third-party assessments that—when adapted through training and tailoring—provide DoD operational commanders a general framework for expert-level, risk-specific CSA. Before commanders can make use of these methodologies, however, IAM and IEM should be analyzed in more detail, and specific recommendations should be made for a tailored framework. An independent comparison and possible harmonization of the NIST and NSA methodologies should also be considered.

Endnotes

1 Department of Defense, DoDI 8500.2, Information Assurance (IA) Implementation, US Government Printing Office, USA, 2003

2 Department of Defense, DoDI 5200.40, DoD Information Technology Systems Certification and Accreditation Process (DITSCAP), US Government Printing Office, USA, 1997

3 Rogers, R., et al.; Security Assessment: Case Studies for Implementing the NSA IAM, Syngress, USA, 2004

4 Cunningham, B., et al.; Network Security Evaluation: Using the NSA IEM, Syngress, USA, 2005

5 US Congress, Federal Information Security Management Act, 44 U.S.C. Sec. 3541, USA, 2002

6 Op. cit., DoDI 8500.2

7 Op. cit., US Congress, p. 59

8 Ibid.

9 Committee on National Security Systems, CNSSD 502, National Directive on Security of National Security Systems, US Government Printing Office, USA, 2004

10 Op. cit., DoDI 5200.40

11 Ibid.

12 Department of Defense, DoD 8510.1-M, Department of Defense Information Technology Certification and Accreditation Process (DITSCAP) Application Manual, US Government Printing Office, USA, 2000

13 Campbell, P. L.; J. E. Stamp; A Classification Scheme for Risk Assessment Methods, SAND2004-4233, Sandia National Laboratories, USA, 2004

14 Ibid.

15 Ibid., p. 15

16 Ibid.

17 Op. cit., Cunningham

18 Op. cit., Campbell

19 Ibid.

20 Op. cit., Cunningham

21 Ibid., p. 18

22 Alberts, C.; A. Dorofee; Managing Information Security Risks: The OCTAVE Approach, Addison-Wesley, USA, 2003

23 Jones, A.; D. Ashenden; Risk Management for Computer Security: Protecting Your Network and Information Assets, Elsevier, UK, 2005

24 Information Systems Security Association, GAISP Project Overview, www.issa.org/gaisp/_pdfs/overview.pdf

25 Nelson, M.; “A Framework for Information Security Governance: The Federal Perspective,” SecureNet Technologies, 2004, www.securenet-technologies.com/NIST_Framework.pdf

26 National Institute of Standards and Technology, NIST SP 800-53A (Draft), Guide for Assessing the Security Controls in Federal Information Systems, US Government Printing Office, USA, 2005

27 Op. cit., Cunningham

28 Saint-Germain, R.; “Information Security Management Best Practice Based on ISO/IEC 17799,” Information Management Journal, July/August 2005

29 Op. cit., Campbell

30 Op. cit., Rogers

31 Op. cit., Cunningham

32 Ibid.

33 Op. cit., Rogers

34 Op. cit., Cunningham

35 Op. cit., Rogers

36 Op. cit., Cunningham

37 Ibid.

38 Op. cit., Campbell

39 Op. cit., Rogers

40 Op. cit., Cunningham

41 Op. cit., Rogers

42 Doughty, K.; J. O’Driscoll; “Information Technology Auditing and Facilitated Control Self-assurance,” Information Systems Control Journal, vol. 4, 2002

43 Op. cit., Rogers

44 Op. cit., Cunningham

45 Op. cit., Campbell

Bryan S. Cline, CISA, CISM, CISSP-ISSEP
is the technical director for information assurance services at Ocean Systems Engineering Corporation—an Apogen company—in Stafford, Virginia, USA. He has more than 25 years of experience in information systems, 10 years of which were in information systems security management and engineering in the US Department of Defense and North Atlantic Treaty Organisation. Cline currently specializes in the certification and accreditation of tactical IT systems and is pursuing a doctorate in information systems with a concentration in information assurance policy from the University of Fairfax (Virginia, USA).

