The Changing Face of Cybersecurity 


In today’s environment, it is commonplace for business transactions—everything from home shopping to multibillion-dollar deals—to take place over the Internet. But, while the Internet has developed rapidly as a channel for business, security on the Internet has lagged. The Internet has a well-earned reputation as a hostile environment, and the growth of organised cybercrime is evidence that not enough is being done to manage the risk. In 2004, Butler Lampson noted:1

After thirty years of work on computer security, why are almost all the systems in service today extremely vulnerable to attack? The main reason is that security is expensive to set up and a nuisance to run, so people judge from experience how little of it they can get away with. Since there’s been little damage, people decide that they don’t need much security. In addition, setting it up is so complicated that it’s hardly ever done right. While we await a catastrophe, simpler setup is the most important step toward better security.

In a distributed system with no central management like the Internet, security requires a clear story about who is trusted for each step in establishing it, and why. The basic tool for telling this story is the ‘speaks for’ relation between principals that describes how authority is delegated, that is, who trusts whom. The idea is simple, and it explains what’s going on in any system I know. The many different ways of encoding this relation often make it hard to see the underlying order.

Over the last 20 years, there has been immense growth in the number of computing and network services, enabling transactions to be undertaken by the smallest businesses across a global marketplace. At the same time, there has been a growing community of individuals who have sought to exploit the vulnerabilities of network devices, computer systems and applications.

IT systems have proved over the last 20 years to be less than perfect, requiring compensating controls to address problems when they arise. Vendors continually release tactical patches and upgrades to fix problems, but hackers with knowledge, skills and capability have developed and released exploits and easy-to-use tools to enable even the least technical users to become adversaries.

At the user level, the approach to securing government and business systems has seen little change: a continuing model of perimeter protection through firewalls and soft internal networks; limited, if any, segregation of applications; and the continuing use of ineffective security mechanisms such as password authentication. Even the more advanced security mechanisms, such as encryption and two-factor authentication, have not lived up to their promise, their fragility exposed through incidents such as the RSA breach2 and the failure of MD5.3 Vendors have delivered sophisticated monitoring capabilities to alert operators should unauthorised changes or accesses be attempted, but few user organisations have the ability to configure the equipment well enough to deliver effective security. Organisations such as the International Organization for Standardization (ISO) and the PCI Security Standards Council have created and driven the adoption of security standards, but their level of penetration, the effectiveness of their implementation and even their suitability to protect against sustained attack are all questionable.
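One reason password authentication keeps failing is that fast, unsalted hashes such as MD5 make stolen credential databases cheap to crack. As a minimal illustrative sketch (not from the article), the standard mitigation is a salted, deliberately slow key-derivation function; the function names below are hypothetical, but `hashlib.pbkdf2_hmac` is standard-library Python:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted, slow password hash with PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per password defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("correct horse battery")
print(verify_password("correct horse battery", salt, iters, digest))  # True
print(verify_password("wrong guess", salt, iters, digest))            # False
```

The iteration count is the tunable cost: high enough to slow offline guessing, low enough not to burden legitimate logins.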

For those organisations with multinational locations, local legislation often causes variances to the security model, potentially opening holes in security and creating attack vectors in the more secure parts of the network. Data are transferred and modified across multiple systems, which may result in discrepancies and possible errors. Outsourcing of services or data storage facilities is often done with little due diligence on provider security capability and hiring policies. Trying to manage records consistently in a dispersed environment (which may or may not be within the organisation’s control) can be a nightmare for the security manager.

The ability to evolve a digital society and to gain the many promised benefits depends in large part on a widespread confidence in the fabric of cyberspace. The denial-of-service attacks against PayPal (2010), CNN (2008), Twitter (2009), the Australian Parliament (2010) and US oil firms (2011) may or may not have been successful in damaging the target, and indeed may have been used for publicity by Internet security companies, but they have increased public concern over security in cyberspace. The more serious intrusion attacks against Sony Corp. (in which credit card details of thousands of gamers were released) and RSA (in which highly sensitive information relating to its secure two-factor authentication device was compromised) demonstrate that the Internet is increasingly a very dangerous place to operate.

Cybersecurity as a national strategy and plan needs to deliver not only better security in government and business services, but a fundamental shift in the safety of the electronic environment in which they operate.4 Over the last 20 years, the IT community has failed to deliver a data utility that has the level of trust common in other utilities. What can the IT community do to turn around the current obstacles to developing an effective digital society?

What Has Changed?

So what has changed on the Internet? The answer, of course, is everything—business activities, information technology, the communications environment and the threat landscape. Today, vendors and attackers have become embroiled in a cyber arms race, and users are the losers. There are regular reports of government and business systems being infiltrated and data breaches in government departments. Consumers’ computers, wireless modems and, increasingly, cell phones are being subverted, and even the basic fabric of cyberspace is under attack, with nations demonstrating their ability to take control of the Internet.5

In the last five years, there have been a number of fundamental shifts in technology and its use that require equally fundamental shifts in attitudes towards security. Information technology has evolved from purely a means of systems automation into an essential characteristic of society: cyberspace. The kind of quality, reliability and availability that has traditionally been associated only with power and water utilities is now essential for the technology used to deliver government and business services running in cyberspace.

Technology is changing rapidly, and another fundamental shift is occurring with the emergence of cloud computing. Cloud computing enables individuals and organisations to access application services and data from anywhere via a web interface; it is essentially an application service provider model of delivery with attitude. The economies possible through use of cloud, rather than internal IT solutions, will inevitably see the majority of businesses and, increasingly, governments running in the cloud within the next five years. This substantially changes the ways in which organisations can affect and manage both their IT function and security in their systems.

Today’s security standards were developed in a world in which computers were subject to fraud and other criminal activities by individuals inside and, in some cases, outside the organisation. This has changed in the last five years with the rapid rise of organised cybercrime through the emergence of robot networks (botnets). Botnets enable criminal activity to be conducted on an unprecedented global scale and can also be used as force multipliers to deliver massive denial-of-service attacks on targeted businesses, at a level at which entire nations are increasingly at risk of being cut off from the global Internet.

Unfortunately, the capability of national police forces to stop global cybercrime is developing much more slowly than the technical abilities of cybercriminals. Cybercrime is now arguably a bigger issue than illegal drugs. The adoption of the Council of Europe Convention on Cybercrime is setting the scene for a global response to cybercrime, and there are signs that police forces globally are working together. However, much more needs to be done to develop the concept of a global jurisdiction before an adequately agile response to cybercrime can be developed.

In what is increasingly recognised as a Hobbesian6 world, government systems are under relentless attack from other nations seeking to gain national intelligence and industrial information. While China has been publicly accused of such activity,7 many nations are known to possess the capability. Further, offensive use of the Internet by nation states is not limited to the intelligence sector. The paradigm, based on the movie War Games, of adolescents breaking into defence systems and playing war games has given way to credible evidence, in cases such as Estonia,8 of state-sponsored attacks by professional armies of cyberwarriors within, or sponsored by, the military. Indeed, the US has created its own Cyber Corps9 and now considers a cyberattack a standard component of a campaign.

Where to Go From Here?

Many of the shortcomings in technology and technology management discussed previously were recognised many years ago, but nationally, commercially and personally sensitive systems continue to be installed and operated with these shortcomings. Indeed, with improvements in technology and capability, organised attackers are much more easily able to cause disruption and fraud. There are a number of specific steps that can be taken to improve the situation and redress somewhat the woeful state of affairs in which the information security industry finds itself.

Given the number of vulnerabilities that exist in new applications (as demonstrated by the numerous security patches issued by major software vendors), the plethora of tools available to cause mayhem across organisations connected to the Internet, and the growing knowledge and capability of the user community, government and industry are avoiding major incidents through luck rather than good judgement. Can government and industry continue absorbing these threats to their business models, knowing that, with the deployment of the Australian National Broadband Network, for example, and Internet Protocol version 6 (IPv6), the frequency and severity of incidents will only increase? Higher bandwidth and increased computing power may extend the ferocity of any concerted attack, and every IP-enabled device could become a potential threat: not just home computers, but also household appliances, cars and mobile phones. In cyberspace, one’s refrigerator could be a hostile agent. Following the Irish Republican Army (IRA) bombing of the Grand Hotel in Brighton in 1984, the IRA released a statement saying, ‘...we only have to be lucky once; you will have to be lucky always’. So it is with cyberspace.

Understanding the Threat Source
Clearly, understanding the source of any threat and the likelihood of the threat being a danger to an organisation’s business interests is a critical first step in building a cybersecurity strategy. Steven Bucci describes the threat actors as shown in figure 1.10

Figure 1

Steven Bucci shows that as cyberthreats shift from individual hackers through organised crime and terrorist-based attacks to national or state-sponsored cyberattacks, the level of danger increases correspondingly.11 Thus, while individuals may cause mayhem, the damage has been largely unsustained and fairly contained. A state-sponsored attack, by contrast, may result in widespread destruction and an ongoing undermining of state sovereignty.

This follows the accepted crime model: Is the value of the target sufficient to warrant an access attempt? What is the likelihood of getting caught? How difficult or expensive is the undertaking? The profile of each of these aspects determines the demographic of a likely attacker. Similarly, changing the parameters of each aspect affects the likelihood of being targeted, bearing in mind that a state-sponsored cyberattack is likely to have extensive resources behind it.
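The crime model above weighs target value against detection risk and attack cost. A toy scoring sketch makes the trade-off concrete; the formula, the attacker profiles and all the numbers below are invented for illustration, not taken from Bucci or the article:

```python
def attack_attractiveness(value, p_caught, cost):
    """Toy score: expected payoff, discounted by the chance of being
    caught and divided by the resources the attack requires.
    All units are notional."""
    return value * (1.0 - p_caught) / cost

# Hypothetical profiles: state actors have deep resources (low effective
# cost) and a low perceived likelihood of consequences.
scenarios = {
    "hobbyist vs. hardened bank":    attack_attractiveness(value=10,  p_caught=0.6, cost=8),
    "organised crime vs. retailer":  attack_attractiveness(value=50,  p_caught=0.3, cost=5),
    "state actor vs. ministry":      attack_attractiveness(value=100, p_caught=0.1, cost=2),
}
for name, score in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Raising `p_caught` (policing, monitoring) or `cost` (hardening) lowers the score, which is exactly what a defender is trying to do; the state-actor row shows why extensive resources overwhelm that arithmetic.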

Avoiding Vulnerabilities
The problems in cyberspace do not come from threats alone, but from the combination of threats and vulnerabilities. The vulnerabilities are neither more nor less than byproducts of a low or non-existent level of quality in personnel and products used to provide cybersecurity.

It is no longer acceptable for professionals, tradespeople, products and services that are critical to the success of cyberspace to operate caveat emptor. Professions such as law, medicine and psychology are controlled through rigorous professional standards, while other professions such as accountancy have established institutes that award chartered qualifications. In some countries, a recognised qualification is mandatory for an individual to register as a tradesperson, such as an electrician or a builder. The establishment of cybersecurity as a profession and a trade is well overdue.

Countries are increasingly recognising the requirement for cyberspace to be built upon a reliable infrastructure. In the UK, to ensure that telecommunications service providers deliver an IP infrastructure with the same level of quality as their analogue networks, an incentive model of public-private partnership has been established for the delivery of infrastructure services to government departments, based on the next-generation security standard.12 Many countries, including Australia, have released Internet service provider (ISP) codes of practice,13 which incorporate requirements for ISPs to take some responsibility for the content carried on their connections.

Building trustworthy software (i.e., software without exploitable weaknesses) continues to be a challenge. While there are theories, models and techniques for developing secure architectures and coding secure software, these have been ineffective in driving change across the IT industry. In part, the academic community has been blind to the need for software security to be a core element of any computer security curriculum; in part, vendors have been too ready to build new systems on insecure foundations, patching holes rather than rebuilding fundamentally secure systems. There is no easy solution, although linking research funding to programmes that meet basic cyberspace requirements would be a good start. Continued selective purchasing by governments will also drive more responsible academic and vendor behaviour.

While avoiding vulnerabilities is the ‘holy grail’, the reality is that vendors continue to create them. Adopting a security strategy that focuses on situational awareness is now an important foundation for understanding threats. Organisations must stay informed about attack trends and the security exposures specific to them, and be able to react accordingly. In addition, testing systems against known security exposures provides a defence-in-depth approach to managing such vulnerabilities. A simple penetration test of an organisation’s external systems will reveal configuration issues or unapplied patches. Hardening systems is another technique used to limit exposure to vendor-delivered vulnerabilities: closing unused connections and checking for password weaknesses, dormant accounts and other flaws that may be exploited by any number of readily available attack tools.
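One of the hardening checks mentioned above, finding dormant accounts, reduces to simple date arithmetic once last-login data has been exported from the directory. A minimal sketch, with an invented 90-day threshold and hypothetical audit data:

```python
from datetime import date, timedelta

def find_dormant_accounts(last_logins, today, max_idle_days=90):
    """Return account names whose last login is older than the idle threshold."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(acct for acct, last in last_logins.items() if last < cutoff)

# Hypothetical export of last-login dates from a directory service
last_logins = {
    "alice":      date(2012, 3, 1),
    "bob":        date(2011, 6, 15),   # long-departed contractor's account
    "svc_backup": date(2011, 12, 1),   # service account nobody reviewed
}
print(find_dormant_accounts(last_logins, today=date(2012, 3, 10)))
# → ['bob', 'svc_backup']
```

In practice the output would feed a review-and-disable workflow rather than automatic deletion, since some service accounts legitimately log in rarely.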

For business, however, cyberspace is just a means by which business can be conducted, not an end in itself. Businesses are evolving rapidly to ensure that they remain competitive and are able to meet customer demand, increasingly through strategic alliances. One of the most effective quality controls that can be put in place is to conduct ongoing, high-quality due-diligence reviews of the organisations with which information is shared, to which services are outsourced or from which information is sourced. As described by Peter Keen in 2004:

Business process outsourcing (BPO) is the investment strategy for sourcing best practice process capabilities end to end along business value chains: the customer relationship chain, supply chain, organizational productivity chain, and product and service innovation chain. It is intensively collaborative because it rests on meshing the BPO client’s skills, technology base and processes with the BPO provider’s distinctive offerings. It is additive—strengthening capabilities along the value chain.14

More often than not, the technology used by partner organisations resides in other countries with complex legal arrangements and data ownership laws, and is possibly subject, implicitly or explicitly, to legislation such as the US Patriot Act.

While the new threats to cyberspace come from outside, this does not mean that insider attacks have ceased. Insufficient attention is often given to carrying out background checks on staff and contractors when they are hired and during their employment, validating the quality of security contractors installing equipment, continually testing implemented controls and reviewing the organisational risk profile. Simple actions such as these are a start, but much more needs to be done to ensure that users can rely on their cyberservices to the same extent as they rely on power and water. National cybersecurity strategies will effect some of the improvements needed to deliver an adequate level of quality in cyberspace, but they will need to be supported by strong industry and consumer commitment, expressed through discretionary purchasing and employment decisions.

Understanding the Business
While work continues on developing and fielding the foundations of a secure cyberspace environment, digital societies are emerging that are less than perfect. For these societies to survive, and possibly even thrive, there must be a clear and absolute understanding of the risk, and management approaches must be developed to mitigate it. At an individual business level, traditional security solutions are often applied with little understanding of business needs or of business information provenance and flow. Without this, it is difficult to properly assess the risk, and without a good risk profile, it is hard to build effective solutions that obviate the need for complex and unwieldy security controls.

Aligning the business needs, information flows and security architecture requires the cybersecurity professional to understand:

  • The business, the strategic objectives, the market, the stakeholders and what information is used and shared
  • The business information flows, relationships and dependencies
  • The value of the business information in financial, strategic and operational terms
  • The impact of failure in information management—corruption, loss or disclosure—and failure in the service provided
  • What it takes to recover to a manageable position in the event of failure, and where that is not possible
  • The relationships inside and outside of the business, and how failures in one area can impact other areas

With this understanding, the cybersecurity professional can start to develop the risk profile for the business and derive options for establishing a security model both from a budgetary and architecture perspective.
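The risk profile that emerges from this understanding is often captured as a register scoring each information asset by likelihood and impact. A minimal sketch of that structure follows; the assets and 1-to-5 ratings are invented for illustration and any real register would carry richer fields (owner, controls, treatment plan):

```python
def build_risk_register(assets):
    """Score each asset as likelihood x impact and sort highest risk first."""
    register = [
        (name, likelihood * impact, likelihood, impact)
        for name, (likelihood, impact) in assets.items()
    ]
    return sorted(register, key=lambda row: -row[1])

# Hypothetical assets, each rated (likelihood, impact) on a 1 (low) to 5 (high) scale
assets = {
    "customer card data": (4, 5),
    "public web content": (5, 1),
    "strategic plans":    (2, 4),
}
for name, score, likelihood, impact in build_risk_register(assets):
    print(f"{name}: likelihood={likelihood} impact={impact} risk={score}")
```

The ordering, not the absolute numbers, is what drives the budget conversation: card data outranks the frequently attacked but low-impact public website.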

Architecting the Solution
By understanding the business and the operational environment, it is possible to develop a security model that is effective and sustainable. Generic security models have been developed over the years, based on physical security controls to protect information and systems housed in a single, defined location and on an electronic perimeter to protect systems that are complete in themselves; however, these models no longer apply. With the advent of virtual companies that exist predominantly on the Internet, with staff members working in a variety of locations, and with information on mobile devices and in the cloud (and where is that?), traditional models can protect only a fraction of the business information. Most security expenditure today is focused on some form of compliance, not on protecting the critical business information.

Technology needs to be architected to reduce the propensity for attacks in cyberspace. This will require a fundamental rethink of the way services are provided to the network. ‘The old walled-garden approach to computer security with its firewalls and intranets seems out of step’.15

The Open Group’s Jericho Forum16 advocates an approach more aligned with cyberspace and one that addresses the de-perimeterisation of organisational systems. The Jericho model is based on establishing trusted paths between partner organisations, improving authentication of users (human and machine) and information, and improving access controls at a more fundamental level of information. This enables business to collaborate with more confidence.

By also creating an architecture that is resilient and self-healing, the effects of an attack on a single target can be minimised. This approach was first advocated in 1989 as the Digital Distributed System Security Architecture (DDSSA):

The architecture covers user and system authentication, mandatory and discretionary security, secure initialisation and loading, and delegation in a general-purpose computing environment of heterogeneous systems where there are no central authorities, no global trust, and no central controls. The architecture prescribes a framework for all applications and operating systems currently available or to be developed. Because the distributed system is an open OSI environment, where functional interoperability only requires compliance with selected protocols needed by a given application, the architecture must be designed to securely support systems that do not implement or use any of the security services, while providing extensive additional security capabilities for those systems that choose to implement the architecture.17

The initial thinking around DDSSA was followed by the development of the concepts of survivable networks,18 which can continue, albeit in a reduced manner, to deliver critical services when under attack. Robust and reliable systems based on these concepts have yet to emerge in the product space and be widely deployed, although the emergence of a resilient network standard19 should go some way towards addressing these issues.

In cyberspace, the focus must be on the protection of information. Information exists either in stored form or in transit across cyberspace, and in either form, it can be stolen with no discernible change. Once stolen, it can be altered or re-sourced and used for repeat frauds. Access controls may reduce this risk in a private setting, but not when information is placed in the public domain. Digital rights management (DRM) technology has been developed to meet this challenge by enabling controls to be applied to information items that prevent changing, copying, printing, forwarding or executing (applications). Organisations such as the Electronic Frontier Foundation (EFF) see this as an infringement of the individual’s rights to access and share information freely, so DRM has not had widespread adoption.
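Full DRM systems are proprietary, but one building block behind the claim that stolen information can be ‘altered or re-sourced’ undetected is worth sketching: a keyed integrity tag makes any alteration detectable to a holder of the key, even though it cannot stop copying. This is an illustrative standard-library sketch, not a DRM implementation; the key and messages are hypothetical:

```python
import hashlib
import hmac

def tag(key, message):
    """Compute a keyed integrity tag (HMAC-SHA256) over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def is_untampered(key, message, expected_tag):
    """True only if the message still matches the tag computed under the key."""
    return hmac.compare_digest(tag(key, message), expected_tag)

key = b"shared-secret"              # hypothetical key shared with the recipient
original = b"pay 100 to alice"
t = tag(key, original)

print(is_untampered(key, original, t))                 # unchanged message verifies
print(is_untampered(key, b"pay 900 to mallory", t))    # alteration is detected
```

Note the limits, which mirror the article’s point: the tag detects modification but does nothing to prevent the information being copied or disclosed in the first place.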

In cyberspace, planning for business continuity continues to be important. As systems become more reliable, businesses become more dependent on them, and the impact of failure increases dramatically. In the 2011 Christchurch (New Zealand) earthquake, an estimated 25 percent of buildings were destroyed or had to be demolished, requiring wholesale relocation, and many businesses were unable to recover material from the buildings before demolition.20 While the use of cloud services would reduce exposure to such localised events, the cloud itself is not a panacea for all ills, as demonstrated by Amazon’s cloud failure in 2011.21 In fact, the use of cloud services brings with it major issues relating to data ownership and the ability to recover data from clouds in the event of service termination. A successful business continuity plan not only ensures that business activity is able to continue and that business data and systems are recovered, it also includes damage control over customer relationships.


Conclusion

Although the scale of activity involving highly sensitive transactions over the Internet has increased dramatically over the last 20 years, there has been very little step change in the security industry. Vulnerabilities that were identified and exploited in the early 1990s remain and continue to be exploited to greater effect in the 2010s. There has been slow uptake of the security architectures that could address these vulnerabilities, while improvements in technology and communications have made it easier to carry out attacks.

There is no single solution or panacea for the issues of cybersecurity, nor should there be. Each organisation should assess what its needs are, how it intends to conduct its business activities and what the risks are to that process. A plethora of highly capable solutions can then be implemented and, more importantly, maintained.

Consumers in cyberspace, be they government, industry or society, continue to be more mobile, more demanding and less tolerant of failure. While there is an increased awareness of threats, often the increased adoption of security comes only after data breaches and system failures.

Security is not an adjunct or add-on to cyberspace; it is a fundamental aspect that must be considered alongside all other core functions to ensure that the business can meet its strategic objectives. Academics need to include cybersecurity as a core component of computer and information science to deliver a workforce properly prepared for its role in the digital society. The organisation’s leaders need to ensure that security architectures are developed to reflect the needs of the business, that the people it employs are certified professionals and tradespeople, and that the technology products and services that it uses are fit for purpose. For its part, government can usefully set the necessary standards and lead by example.

New governance models need to be developed that provide a consistent and effective basis for trust in a business process co-sourcing environment, and should ensure the existence of testing, monitoring and business continuity.

Security technology continues to be complex and unwieldy, and not well aligned with consumer needs. Having to remember multiple IDs and complex passwords is a major inconvenience and a cause of many security issues. Posting personal information to public sites continues to be a contributing factor to identity theft. Firewalls protect what information is left behind inside the corporate electronic perimeter, but do little to protect the vast amount of business-sensitive information outside. Intrusion detection systems detect yesterday’s problems, but not tomorrow’s problems. Security models, architectures and technologies need to reflect these concerns.

Multiple activities within the business do not mean that there should be multiple security architectures to support them. Having a single, consistent and persistent approach that is proven and flexible is much easier to maintain. However, this does require a good understanding of the business objectives, the operational market and the risks the business faces. Hence, the security model must recognise that protection of services and information in itself is not enough; the company must be able to recover from failure and continue to operate at a level expected by its operating partners and customers. And, it must be able to demonstrate that capability on a continuous basis.


1 Lampson, Butler W.; ‘Computer Security in the Real World’, IEEE Computer, 6 June 2004.
2 Williams, Alex; ‘RSA Breach: An Attack That Used a Social Media Boobytrap?’, ReadWrite Enterprise, 18 March 2011.
3 Wang, Xiaoyun; Hongbo Yu; ‘How to Break MD5 and Other Hash Functions’.
4 Banks, Lisa; ‘Attorney General Outlines Cyber Security Strategy’, CIO, 20 July 2011.
5 PBS; ‘China’s Internet “Hijacking” Creates Worries for Security Experts’, 26 November 2010.
6 Williams, Michael C.; ‘Hobbes and International Relations: A Reconsideration’, JSTOR, 1996.
7 Norton-Taylor, Richard; ‘Titan Rain—How Chinese Hackers Targeted Whitehall’, The Guardian, 4 September 2007.
8 As reported in various publications, including: Tiirmaa-Klaar, Heli; ‘Cyber Security Threats and Responses at Global, Nation-state, Industry and Individual Levels’, SciencesPo; Shackleford, Scott; ‘State Responsibility for Cyber Attacks: Competing Standards for a Growing Problem’, University of Cambridge; Greenberg, Andy; ‘When Cyber Terrorism Becomes State Censorship’.
9 US Department of Defense; Cyber Strategy.
10 Bucci, Steven; ‘The Confluence of Cyber Crime and Terrorism’, The Heritage Foundation, 12 June 2009.
11 Ibid.
12 ‘CESG IL2/IL3 Accreditation (224 & 334)’, 21 October 2010.
13 News4Us; ‘Australian ISP Code of Practice Now in Effect’, 2 December 2010.
14 Keen, Peter; ‘Business Process Outsourcing: Imperative, Historically Inevitable, Ready to Go’, 2004.
15 Lohr, Steve; ‘The Internet Firewall: R.I.P.?’, The New York Times Bits, 11 September 2007.
16 The Jericho Forum Vision.
17 Gasser, Morrie; Andy Goldstein; Charlie Kaufman; Butler Lampson; ‘The Digital Distributed System Security Architecture’, 1989.
18 Shore, Malcolm; Xianglin Deng; ‘Architecting Survivable Networks Using SABSA’, 23 September 2010.
20 Stevenson, Joanne; Hlekiwe Kachali; Zachary Whitman; Erica Seville; John Vargo; Thomas Wilson; ‘Preliminary Observations of the Impacts of the 22 February Earthquake on Organisations and the Economy’, 18 April 2011.
21 Gilbertson, Scott; ‘Lessons From a Cloud Failure: It’s Not Amazon, It’s You’, Wired, 25 April 2011.

Stewart Hayes has been involved in risk management and security practices for more than 25 years, providing specialist consultancy services in the Americas, Asia Pacific, Europe and the Middle East. Hayes can be reached at stewart.hayes@jakeman.

Malcolm Shore has an extensive IT background with more than 20 years of experience in security and risk management. He can be reached at malcolm.shore@

Miles Jakeman, Ph.D., is a business management specialist. As the Citadel Group Limited’s managing director, Jakeman has advised senior business leaders and government officials on a number of occasions, including representing countries in ministerial forums.


The ISACA Journal is published by ISACA. Membership in the association, a voluntary organization serving IT governance professionals, entitles one to receive an annual subscription to the ISACA Journal.

Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and/or the IT Governance Institute and their committees, and from opinions endorsed by authors’ employers, or the editors of this Journal. ISACA Journal does not attest to the originality of authors’ content.

© 2012 ISACA. All rights reserved.

Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For other copying, reprint or republication, permission must be obtained in writing from the association. Where necessary, permission is granted by the copyright owners for those registered with the Copyright Clearance Center (CCC), 27 Congress St., Salem, MA 01970, to photocopy articles owned by ISACA, for a flat fee of US $2.50 per article plus 25¢ per page. Send payment to the CCC stating the ISSN (1526-7407), date, volume, and first and last page number of each article. Copying for other than personal use or internal reference, or of articles or columns not owned by the association without express permission of the association or the copyright owner is expressly prohibited.