ISACA Journal
Volume 5, 2016

Features 

Cyberattacks—The Instability of Security and Control Knowledge 

Jeimy J. Cano, Ph.D., CFE 

In a world of accelerating change that is becoming increasingly digital, security and control practices have become an ongoing exercise in reinvention. Generating value, positively surprising the customer and periodically creating discontinuities are the business mantras that dominate the available literature on management in the 21st century. These mantras cannot be ignored by information security and cyber security executives.

It has been said that “There are two kinds of companies: those who have suffered a cyberattack and those who have not realized it yet,”1 a reflection that illustrates the paradox of boards that are concerned with cyber security in their organizations without truly understanding it. These companies tend to place excessive trust in existing security and control practices and standards.2

Efforts to achieve a deeper understanding of the patterns that attackers follow concentrate on the tactical and operational dynamics of the company, particularly in the area of technological security, but company management continues to lag behind in addressing the implications and impacts on the business dynamic. This reality contrasts with the reports that demonstrate the need for companies to adopt a vigilant, proactive posture, at a strategic level, to create a resilient environment.3, 4

Developing a vision of cyber security is not easy5 because putting basic security and control measures in place (e.g., isolation, access control, encryption, authentication and continuous monitoring) is not enough. Basic security and control measures may be insufficient because many of the actions of attackers do not focus on harming technologies, but rather, make slight changes to parameters without affecting overall functionality, leaving the smallest possible traces of their presence and increasing the invisibility of their actions.

Added to this challenge is the constant need to increase the digitalization of both company operations and products, which creates a new analytical environment for security and cyber security professionals. Companies are required to have greater contact and be more connected with their customers.6 Consequently, connectivity serves as a magnet for emerging attacks, given that the interfaces create bridges connecting digitalized worlds with nonhomogeneous, uncertain protection practices that can give rise to unknown operational and economic impacts. Because of this connectivity, it is essential to update knowledge about information security practices in light of a digitally modified and hyperconnected world.

Standardized Knowledge for a Known World

By questioning known security and control models, the digital world establishes the need to address emerging concepts such as resilience, survival and reinvention, which are words and challenges that should motivate the redefinition of known practices and their adaptation to new demands of the current environment.

Looking at the recently declassified document7 on security and control by the US Department of Defense (DoD), it is apparent that the existing standards followed a concrete, focused route to maintain trust in the technological infrastructure.

Nevertheless, other areas of protection have been neglected, such as the emission of frequencies by electronic devices and connecting cables, which are generally assumed to be secure after they are installed and set up. It is possible that these areas of protection may regain their former relevance given that recent attacks on mobile devices and wireless technologies re-create the types of activity that, in the 1970s, posed a risk in military installations.

The aforementioned DoD document defines the problem of security from three points of view:

  • Security is neither unique nor particular to a computerized system or its configuration; security applies to a wide spectrum of computational technologies.
  • Users must protect themselves from interference from the outside or from the system itself to preserve the integrity of the system’s data and programs.
  • Classified information requires that hardware and software suppliers build in additional levels of protection for those machines operating in secure environments.8

These three perspectives, which confirm that physical security measures alone are not enough to prevent the penetration of computerized systems, establish the rules that make it harder for a third party to compromise computerized environments that are physically distributed and share resources.

To this day, many companies use these guidelines as the basis for controls and verifications. This use has resulted in multiple checklists that are executed systematically and validated in relation to the evidence required, which indicates their suitable application and functioning in accordance with standards.9

One notable practice in the DoD document is known as “assurance against unforeseen conditions,” which is conceptualized as the fail-safe approach.10, 11 This practice seeks a known answer in the face of unexpected events with the aim of maintaining a controlled failure scenario that responds to the specifications of the programming and design environment of the computational solution.

The fail-safe approach should be a natural process of any digital system design, given that the inevitability of a failure is the constant that motivates the pursuit of a computational development that is resistant to failure. In the event of failure, a fail-safe approach enables companies to have clear knowledge of the possible responses and understand the condition and state in which to assume and conclude the execution of a program or service.
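The fail-safe idea described above can be illustrated with a minimal sketch in Python. The names (`State`, `fail_safe`, `process_transaction`) are hypothetical and chosen only for illustration; the point is that an unexpected event produces a known, specified response rather than an unpredictable crash.

```python
import logging
from enum import Enum

class State(Enum):
    RUNNING = "running"
    SAFE_HALT = "safe_halt"   # the known, controlled failure state

def fail_safe(default_state=State.SAFE_HALT):
    """Decorator: on any unexpected error, log the event and
    return a predefined safe state instead of failing unpredictably."""
    def wrap(fn):
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                # Controlled failure: the response is specified in advance.
                logging.error("Unexpected failure in %s: %s", fn.__name__, exc)
                return default_state
        return inner
    return wrap

@fail_safe()
def process_transaction(amount):
    if amount < 0:
        raise ValueError("negative amount")
    return State.RUNNING
```

Here `process_transaction(-5)` does not crash the program; it logs the event and concludes in `State.SAFE_HALT`, the condition in which the designer decided execution should end.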

The employment of audit logs and their functionality, whose objective is focused on “verifying that the system functioned correctly and, more importantly, that it is being used properly,”12 establishes a monitoring and control reading that:

  • Fully identifies the user within the system (name, identification and access terminal)
  • Records all operator functions, including the name and the function that has been used
  • Records unauthorized access to files, including the username, the terminal and the program identification
  • Includes special functionalities of the system, e.g., password generation, changes to data classification, modification of security parameters and transaction records
  • Records program reboots, internal check failures, execution of privileged programs without authorization, manual operations and environmental failures13
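The fields listed above can be captured in a structured audit record. The following Python sketch is illustrative only; the function and field names (`audit_record`, `terminal`, `authorized`) are assumptions, not part of the DoD guidance, but the record covers the same questions: who acted, from where, what they did and whether it was permitted.

```python
import json
from datetime import datetime, timezone

def audit_record(user_id, user_name, terminal, event, detail, authorized=True):
    """Build one structured audit-log entry: user identification,
    access terminal, the function or event involved, and whether
    the action was authorized."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": {"id": user_id, "name": user_name, "terminal": terminal},
        "event": event,        # e.g., "file_access", "password_generation"
        "detail": detail,      # program or file name, parameter changed, etc.
        "authorized": authorized,
    }

# Example: recording an unauthorized file access, serialized for a log store
entry = audit_record("u1042", "j.doe", "TTY-7", "file_access",
                     {"file": "payroll.dat", "program": "backup.sh"},
                     authorized=False)
print(json.dumps(entry))
```

Serializing each entry (here as JSON) keeps the record machine-readable, which matters for the monitoring and analysis uses discussed next.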

Audit logs are generally required when some adverse condition occurs, because they may help to explain what happened. Audit logs are not used to anticipate behavior in the system, improve the system’s functioning or drive key changes that prevent future situations that might affect proper functioning.

For more than 40 years, the lessons learned from implementing these practices have been capitalized on by both information security professionals and IT auditors, shaping the generally known technology controls. These controls continue to play a part in how security is justified to top management, because they inform executives of the effectiveness of security efforts.

Knowledge for a Volatile, Uncertain, Complex and Ambiguous World

Does everything learned and implemented in the field of security and control since 1970 afford peace of mind to companies today, which operate in a volatile, uncertain, complex and ambiguous world? There may be no definite answer to this question, given the inherent tension between the cost-effectiveness of controls and the probability and impact of failure when seeking a reasonable, efficient investment in alternative security mechanisms.

This inherent tension in the face of unstable reality and the unexpected moves of attackers, who at any moment can overcome existing protection mechanisms, should be the motivation for changing established ideas about information protection. This tension should encourage reflections that analyze what is known and practiced with what is unknown and yet to be experienced.

This evolution means going from the known to the unknown and, ultimately, to the uncertain, where flexibility reigns.14 Thus, traditional tools such as cost-benefit analysis, linear programming, decision trees, stochastic models, and insurance and coverage lack the variety required to account for the variability of an environment that is increasingly driven by complexity and chaos. Given this status quo, security and cyber security professionals must seek new approaches that build higher-order-thinking competencies, assisted by scenario planning, systems thinking, complexity theory, deliberate errors and strategic intuition, among other techniques, enabling these professionals to recognize the patterns and possibilities present in the environment. Knowing these patterns can help practitioners act proactively in the face of the inevitability of failures.

It is for this reason that recently released technologies have the ability to learn and reconfigure the rules for acting, validating the strength of the controls and comparing their effectiveness. The wisdom of the analyst, the unexpected actions of the attacker and these technological advances make it possible to think of the reality of failure.

Consequently, the effectiveness indicators of information technology security as seen through proven facts, feedback and root-cause analysis should be accompanied by other indicators oriented toward capacities to be developed, anticipated errors and ongoing learning. This combination approach makes decision making a dynamic space for understanding the attack scenario and helps in developing systemic thinking15 that is outside the box, similar to the way an attacker thinks.

Value Realization of a Cyberattack

A cyberattack, as figure 1 illustrates, is an exercise in creating and exploiting vulnerabilities that occur between the physical and the digital space. Considering the valuable information that is available online; the open, interconnected digital networks; and the destabilizing capabilities of the attackers, these attacks are able to produce changes in the perception of the company environment (uncertainty and instability), assisted by the skillful hiding of technological discontinuities, which are difficult to detect and to combat given their invisibility and the scarcity of traces, many of them inconsistent, modified or nonexistent.

A cyberattack confirms the fragmented responsibility that can be seen when technological convergence between information technology and operations technology is based on a digitally modified product. Neither those in charge of maintaining control of industrial or manufacturing processes (i.e., those who ensure that operational failures do not affect the outside world) nor information security professionals (i.e., those who are focused on avoiding the compromise of internal equipment by external entities) own a new domain of protection named digital resilience.

Digital resilience is a practice that combines the best of operative discipline with resistance from perimeter protections so that, by articulating people, processes, technologies and the regulatory framework, it is able to create a joint force that defends the value of the business processes, by creating and developing a culture of prevention and anticipation in the face of the unexpected and uncertain.16

While a cyberattacker has a partial view of the defense and protective strategy, the effectiveness of an attack will make evident the lack of communication between areas, the weakness of the available interfaces, the inexperience of management, the misalignment of the strategic objectives, and, particularly, the absence of lessons learned based on similar conditions in the infrastructure or information that is important to the organization.

If it is accepted that high-level company executives are more concerned about internal attacks, external security breaches and misinformed employees and that their priorities do not align with the priorities of security professionals (e.g., external attacks, security in the cloud and custom-made malware), companies are creating an environment ideal for malicious acts. This disparity creates a corporate context of confusion and misguided acts, which could obscure the aims of the attack and give rise to containment and eradication procedures that compromise the evidence of the attack.

The value of the cyberattack is not the adverse condition that can be achieved during the attack, but the loss of attention that the attack causes, enabling the attackers to conduct the real intent of their action: destabilize the organization and alter its position in the context of its business. This goal affects the enterprise’s business risk appetite and increases distrust of executive assurance against residual risk.17

The digital ecosystem that benefits from the aforementioned adverse digital action gains value, i.e., creating instability and innovation for the attacker. This value contrasts with the technological investment made by the company in hopes of making itself more resistant to attacks.

Rethinking Security and Control

The consolidated knowledge of many years of practice and positive results in relation to protection and control practices has enabled a fundamental advancement of the established assurance patterns that are available to companies today.18 However, this knowledge, when faced with the dynamics of the current environment, needs to be updated to establish a renewed view of security and the trust of executive bodies.

Although it is not possible to determine when an attack will occur or where the attack will originate, it is possible to know how to act given that the available technological security infrastructure will be actively monitoring the environment and the structured uncertainty19 in the environment.

The flow of information has increased and made it possible to know people, their tastes and their preferences better. In an era when the concept of the perimeter is increasingly fading; when mobility is the foundation of the digital, instantaneous society; and products and services are digitally modified, the subjects of security and control become part of the promise of value of those new goods.

For all of these reasons, cyberattacks have a dual effect on society, individuals and companies. One effect communicates the urgency of encouraging experimentation and deliberate errors to think in an asymmetric way, like an attacker does. The other effect involves rethinking the existing practices to increase the return on the lessons learned20 by using them as the basis for defending the company and its information.

Conclusion

Framing cyberattacks as a language of challenges and risk and knowing that it is necessary to rely on information protection fundamentals, infrastructure and productive processes requires openness and trust from the business community. This cooperation is necessary to construct a war scenario in which there are no reproaches for the past nor bets about the future. Likewise, the lessons learned from cyberattacks can be used to provide a shared view of the present, based on the knowledge of others, watchfulness over the environment, operative discipline and personal reflection.

Endnotes

1 Florio, L.; “La ciberseguridad, una necesidad que las empresas cada vez valoran más,” La Vanguardia, 25 April 2016, www.lavanguardia.com/economia/20160418/401196827086/ciberseguridad-empresa-russell-reynolds.html
2 Ibid.
3 Deloitte, Assessing Cyber Risk: Critical Questions for the Board and C-suites, 2016, https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Risk/gx-ers-assessing-cyber-risk.pdf
4 IIARF Research Report, Cybersecurity: What the Board of Directors Needs to Ask, ISACA and IIARF, 2014, www.isaca.org/knowledge-center/research/researchdeliverables/pages/cybersecurity-what-the-board-of-directors-needs-to-ask.aspx
5 Denning, P.; D. Denning; “Cybersecurity Is Harder Than Building Bridges,” American Scientist, vol. 104, no. 3, May 2016, p. 155, www.americanscientist.org/issues/pub/2016/3/cybersecurity-is-harder-than-building-bridges
6 Pitzer, D.; A. Girdner; Addressing and Managing Cyber Security Risk and Exposures in Process Control, Society of Petroleum Engineers Intelligent Conference and Exhibition, 1-3 April 2014, The Netherlands
7 Department of Defense; Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security, 11 February 1970, USA, http://seclab.cs.ucdavis.edu/projects/history/papers/ware70.pdf
8 Ibid.
9 Ibid.
10 Axelrod, C. W.; Engineering Safe and Secure Software Systems, Artech House, USA, 2013
11 Basin, D.; P. Schaller; M. Schlapfer; Applied Information Security: A Hands-on Approach, Springer, Germany, 2011
12 Op cit, Department of Defense
13 Ibid.
14 Schoemaker, P.; Brilliant Mistakes: Finding Success on the Far Side of Failure, Wharton Digital Press, USA, 2011
15 The Coaching Room; “Thinking Globally Is A Skill That Can Be Taught,” Thecoachingroom.com, 22 January 2016, www.thecoachingroom.com.au/blog/thinking-globally-is-a-skill-that-can-be-taught
16 Kaplan, J.; T. Bailey; D. O’Halloran; A. Marcus; C. Rezek; Beyond Cybersecurity: Protecting Your Digital Business, Wiley, USA, 2015
17 Boville, J.; “Does Aligning Cybersecurity, Process Safety Approaches Reduce Risk?,” Pipeline and Gas Journal, vol. 243, no. 2, February 2016, https://pgjonline.com/2016/02/08/does-aligning-cybersecurity-process-safety-approaches-reduce-risk/
18 Myrvang, P.; T. Winther; “Top 10 Cybersecurity Vulnerabilities for Oil and Gas,” Pipeline and Gas Journal, vol. 243, no. 2, February 2016, https://pgjonline.com/2016/02/09/top-10-cybersecurity-vulnerabilities-for-oil-and-gas/
19 Charan, R.; The Attacker’s Advantage: Turning Uncertainty Into Breakthrough Opportunities, Perseus Books Group, USA, 2015
20 Birkinshaw, J.; M. Haas; “Increase Your Return on Failure,” Harvard Business Review, May 2016, https://hbr.org/2016/05/increase-your-return-on-failure

Jeimy J. Cano, Ph.D., CFE
Is a professor and researcher in the Information Technology, Telecommunications, Electronic Commerce Studies Group (GECTI) of the Law School at University of the Andes (Colombia) and is a candidate for his second doctoral degree, this one in education, from Saint Thomas University (Colombia).

Opinions expressed in the ISACA Journal represent the views of the authors and advertisers. They may differ from policies and official statements of ISACA and from opinions endorsed by authors’ employers or the editors of the Journal. The ISACA Journal does not attest to the originality of authors’ content.