Security Labeling of IT Services Using a Rating Methodology 

 

Subcontracting certain IT functions (e.g., development, housing, hosting and outsourcing) has been common over the last few decades, but the emergence of cloud computing has taken it to a new level. And, although some argue that cloud computing is merely an evolution of outsourcing, it is, in fact, a new paradigm that is changing the approach to IT. Instead of something that organizations build themselves, IT is becoming a service they consume (similar to what happened with energy during the industrial revolution).

Security professionals have long been aware that subcontracting does not eliminate the IT risk that organizations face; in fact, with subcontracting, organizations lose direct control over the security measures implemented by service providers. Thus, security professionals have relied on audits and certifications to build trust relationships with these service providers.

However, this approach must change to fit the new paradigm. Despite audits and certifications, users continue to believe that more transparency is needed:

Among the limiting factors [of cloud computing adoption], security and data ownership (both related to the ability to protect information assets) and factors related to legal issues, contracts and regulatory compliance topped the list. The fifth factor, information assurance, is significant because it is related to the transparency of cloud offerings and management’s ability to gain comfort that information is protected to the required degree.1

That is to say, although security professionals have been applying best practices and asking for security audits and certifications, these mechanisms have not been able to convey the level of trust required by customers of cloud computing services.

Do Audits and Certifications Really Fail to Provide Transparency?

Security audits and certifications are the foundations of trust-building between customers and providers, but they have characteristics that make additional mechanisms necessary:

  • Typical audit reports cannot be freely distributed; they are only for the parties involved (typically, the customer and the provider), which requires the provider to be audited by every (potential) customer.
  • Service Organization Control (SOC) reports can be made public, but then other issues appear: The criteria used by the auditor may or may not be relevant to the customers because they have been fixed by a third party. If the criteria are not relevant for the customer, the first point is applicable again.
  • Finally, regarding certifications, there is no certification for the security of services. What providers are certifying is their information security management system against the ISO/IEC 27001 standard. This certification has two issues:
    1. It does not say anything about the security measures implemented by the provider; it indicates only whether the provider operates an information security management system.
    2. It obliges the customer to examine the scope of the certification, which may not cover the service to which it wants to subscribe.

Of course, a provider that implements a certified information security management system (ISMS) follows best practices and adopts security measures through a risk management process, but the customer cannot infer from the certification alone how robust the security measures the provider has in place actually are. The certification provides only simple information: that the provider implements an ISMS following ISO/IEC 27001.

The tools that security professionals currently have require customers to be security specialists to understand the outcomes, and they do not provide comparable results.

Catherine Ashton, High Representative of the European Union for Foreign Affairs and Security Policy, has highlighted the same issue and invites the industry to “develop industry-led standards for companies’ performance on cybersecurity and improve the information available to the public by developing security labels or kite marks helping the consumer navigate the market.”2

A New Tool: Rating

Exploring the economic theory that explains the relationship between customers and providers may be useful when looking for new ways of building trust in the cloud services market. In fact, the security of cloud services (i.e., cybersecurity) faces a well-known problem: information asymmetry.

This concept was explained by George Akerlof in 19703 and refers to “decisions in transactions where one party has more or better information than the other,” which is exactly what happens in the cloud service market regarding security: The provider has better information about the security measures implemented than the customer.

Information asymmetry creates an imbalance of power that can sometimes cause the transactions to go awry. The most common problems that arise are adverse selection4 and moral hazard.5 The worst consequence that information asymmetry could have is the disappearance of the market.

Economists Michael Spence and Joseph E. Stiglitz analyze two primary solutions to this problem:

  • Signaling6—Signaling means that one party (in this case, the cloud provider) credibly conveys some information about itself to another party (the customer). This could sound a bit strange, but security professionals are very familiar with this kind of mechanism, called a security certification, which provides a seal that organizations can use to signal that they are compliant with some set of requirements.
  • Screening7—The screener (the one with less information; in this case, the customer) attempts to rectify the asymmetry by learning as much as it can about the provider. Again, it may sound strange, but security professionals use screening continuously; it is called an audit.

Once the underlying economic theory8 is understood, is there any other option to build up trusted relationships?

There are alternatives in the economic world. The same information asymmetry problem that the cloud service market faces today was faced, for example, in the early days of the debt markets. In those days, investors were in a weak position in relation to companies asking for financing: They did not know how likely the debtors were to repay the credit. Thus, credit rating agencies began to rate debtors as a signaling mechanism.

Security Labels Using a Rating Methodology

Using these concepts from the economic world in the security field, a security labeling system for IT services can be implemented. This system could help (potential) customers easily understand the security characteristics of the services in the market and analyze whether they fit their needs according to their risk profile (see figure 1)—in the same way that household appliances are rated by their energy consumption or cars are rated for their safety. The labeling system is also more efficient because not every customer willing to subscribe to a service needs to audit the security controls, thanks to the “audit once, use many times” approach (as FedRAMP does9).

Figure 1

Rating security measures according to, for example, five levels (from A to E, A being the best) in the different dimensions of cybersecurity (confidentiality, integrity and availability [CIA]) could provide 125 different possible ratings (5 levels in each of 3 dimensions, i.e., 5³ = 125) to help customers choose the services that best fit their needs. This is unlike certification, which divides all the providers into two groups: those that are certified and those that are not.
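
As a rough illustration of that arithmetic, the following Python sketch (purely hypothetical; the names and representation are not part of any published methodology) enumerates every possible label:

    # Hypothetical illustration: enumerate every possible security label built
    # from five levels (A best, E worst) across the three CIA dimensions.
    from itertools import product

    LEVELS = "ABCDE"                      # A = strongest, E = weakest
    DIMENSIONS = ("confidentiality", "integrity", "availability")

    labels = ["".join(combo) for combo in product(LEVELS, repeat=len(DIMENSIONS))]
    print(len(labels))  # 5 ** 3 = 125 possible ratings, e.g., 'AAA', 'BDC', 'EEE'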

Figure 2

The rating methodology outcome should be a set of three letters indicating the soundness of the security measures implemented by the service provider in each specific service in the mentioned security dimensions. For example, a rating of BDC means (figure 2):

  • A rating of B in the confidentiality dimension
  • A rating of D in the integrity dimension
  • A rating of C in the availability dimension

In the example, the service pays more attention to confidentiality aspects, but it is not suitable for those with high availability requirements (who should look for A or B ratings in the third dimension).
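
To make that comparison concrete, here is a minimal sketch (assumptions: the three letters are ordered confidentiality, integrity, availability, and the customer expresses its needs as a minimum acceptable letter per dimension) of how a customer might check a label against its risk profile:

    # Hypothetical illustration: check whether a service label such as 'BDC'
    # satisfies a customer's minimum requirements in each CIA dimension.
    LEVEL_ORDER = {level: rank for rank, level in enumerate("ABCDE")}  # lower rank = stronger

    def meets_requirements(label, required):
        # Both arguments are 3-letter strings: confidentiality, integrity, availability.
        return all(LEVEL_ORDER[got] <= LEVEL_ORDER[need]
                   for got, need in zip(label, required))

    # A customer with high availability requirements (at least B in the third dimension):
    print(meets_requirements("BDC", "EEB"))  # False: availability C is weaker than the required B
    print(meets_requirements("BDA", "EEB"))  # True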

Rating (for services, not for providers) gives a relative value that can be understood as a forecast of the vendor's technical solvency in relation to its security and resilience. In this way, services with a better rating would have a lower probability of suffering an incident that significantly affects service level agreements.

Building the Rating System

The first step to building the rating system is to create a system to assign a level (from A to E) to the security measures implemented by the provider. To do so, two tasks should be carried out:

  1. Elaborate an inventory of security measures to be evaluated. This task should be supported by generally accepted standards, controls and frameworks (e.g., National Institute of Standards and Technology [NIST] standards, ISO/IEC 27002, the Payment Card Industry Data Security Standard [PCI DSS], ITIL, the European Union Data Protection Directive).
  2. Define the five levels for each security measure. This definition has two components: (i) security measures that are processes are rated according to their maturity level (similar to the COBIT Process Assessment Model); and (ii) security measures that depend on technology (e.g., security configurations, tools) are rated according to their robustness (e.g., a 12-character password is stronger than a six-character password, and a 12-character password is stronger still if it requires a combination of letters, numbers and special characters). A simple illustration of such a mapping follows this list.
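
By way of illustration only, the sketch below shows how one technology-dependent measure (a password policy) and one process measure might be mapped to the five levels; the thresholds and the maturity-to-letter mapping are assumptions, not part of the article's methodology:

    # Hypothetical illustration: rate one technology-dependent measure (a password
    # policy) from A to E according to its robustness. Thresholds are assumed.
    def rate_password_policy(min_length, requires_mixed_charsets):
        if min_length >= 12 and requires_mixed_charsets:
            return "A"
        if min_length >= 12:
            return "B"
        if min_length >= 8 and requires_mixed_charsets:
            return "C"
        if min_length >= 6:
            return "D"
        return "E"

    # Process measures could analogously be rated by maturity level (0-5, in the
    # style of a COBIT process assessment); the mapping below is an assumption.
    def rate_process_maturity(maturity_level):
        return {5: "A", 4: "B", 3: "C", 2: "D"}.get(maturity_level, "E")

    print(rate_password_policy(12, True))   # 'A'
    print(rate_password_policy(6, False))   # 'D'
    print(rate_process_maturity(3))         # 'C'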

Figure 3

Besides security controls maturity (analyzed previously), there are other elements that should be considered in the rating methodology to build up the security label because they contribute to trust relationships (figure 3); a simple aggregation sketch follows the list below:

  • Vendor reliability—Information related to the strategy of the service provider including business plans, financial stability, management bodies, service road map and qualification of employees
  • Resilience—The ability of the provider to recover in the case of incidents
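
The article does not prescribe how these elements are combined into the label. As a purely illustrative sketch, a weighted aggregation per security dimension might look as follows (the weights, normalized scores and letter thresholds are all assumptions):

    # Hypothetical illustration: combine security controls maturity, vendor
    # reliability and resilience into a single letter for one dimension.
    # Scores are normalized to 0.0 (weakest) .. 1.0 (strongest); weights are assumed.
    WEIGHTS = {"controls_maturity": 0.6, "vendor_reliability": 0.2, "resilience": 0.2}

    def dimension_letter(scores):
        total = sum(WEIGHTS[key] * scores[key] for key in WEIGHTS)
        for threshold, letter in ((0.8, "A"), (0.6, "B"), (0.4, "C"), (0.2, "D")):
            if total >= threshold:
                return letter
        return "E"

    print(dimension_letter({"controls_maturity": 0.9,
                            "vendor_reliability": 0.7,
                            "resilience": 0.8}))  # 'A' (weighted total 0.84)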

Conclusions

Security audits and certifications are necessary, but not sufficient, conditions for building trust relationships in cyberspace. Security professionals must be a step ahead and propose new ways to evaluate the security and resilience of IT services, and they must be prepared for new scenarios such as those created by cloud computing and cybersecurity. In this situation, security labeling based on a rating methodology could, for example, help users understand the risk they face when using IT services, compare different options and be more efficient in procurement processes.

Endnotes

1 ISACA and Cloud Security Alliance, “2012 Cloud Computing Market Maturity Study Results,” USA, 2012
2 High Representative of the European Union for Foreign Affairs and Security Policy, “Cybersecurity Strategy of the European Union: An Open, Safe and Secure Cyberspace,” JOIN (2013) 1 final, Belgium, 7 February 2013
3 Akerlof, George A.; “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism,” The Quarterly Journal of Economics, vol. 84, no. 3, 1970
4 Adverse selection “refers to a market process in which undesired results occur when buyers and sellers have asymmetric information (access to different information); the ‘bad’ products or services are more likely to be selected.” Chandler, Seth J.; “Adverse Selection,” The Wolfram Demonstrations Project
5 A moral hazard is a “situation where a party will have a tendency to take risks because the costs that could incur will not be felt by the party taking the risk. In other words, it is a tendency to be more willing to take a risk knowing that the potential costs or burdens of taking such risk will be borne, in whole or in part, by others.” Dembe, Allard E.; Leslie I. Boden; “Moral Hazard: A Question of Morality?,” New Solutions, 10(3), 2000, p. 257-279
6 Spence, Michael; “Job Market Signaling,” The Quarterly Journal of Economics, vol. 87, no. 3, 1973
7 In screening, “[t]he underinformed party can induce the other party to reveal their information. They can provide a menu of choices in such a way that the choice depends on the private information of the other party.” Stiglitz, Joseph E.; “There Is No Invisible Hand,” The Guardian Comment, UK, 20 December 2002
8 In 2001, the Nobel Prize in Economics was awarded to George Akerlof, Michael Spence and Joseph E. Stiglitz for their “analyses of markets with asymmetric information.”
9 Federal Risk and Authorization Management Program, www.fedramp.gov

Antonio Ramos, CISA, CISM, CRISC, CCSK, is the founder of Leet Security, the first security rating agency in the European Union and president of the ISACA Madrid Chapter. With more than 14 years of experience, Ramos has specialized in security governance, strategic planning in critical infrastructure protection, cybersecurity and cloud computing. He can be reached at antonio.ramos@leetsecurity.com.


