Kim Maes, Steven De Haes, Ph.D., and Wim Van Grembergen, Ph.D.
In our empirical research, conversations with chief information officers, business sponsors and project managers showed that most people in contemporary organizations know what a business case signifies. They have a rather good idea of what should be included when developing such an investment document, and most of them understand its purpose. Moreover, they acknowledge that a business case may play an essential role in the decision making and ultimate benefits realization of an IT-enabled investment. Not surprisingly, a very large proportion of European companies develop some kind of business case today.
However, many of these people could not adequately explain how they were using such a business case after the investment was finally approved. Most business cases gathered dust on someone’s shelf or hard disk, and the realization of investment benefits was not tracked after the official launch of the end products and services. The practice of using a business case is characterized by the so-called knowing-doing gap: organizations understand the importance of using a business case throughout the entire investment life cycle, but very few act upon this important insight.
The advice coming out of our expert and case research is straightforward. First, organizations should start by clearly articulating what the investment is about and what it should realize. Second, developing and using a business case is not a solitary activity; relevant stakeholders should be closely involved throughout the entire process of business case use. Finally, maintaining the business case during investment implementation requires an equal amount of attention in order to cope with escalations or capitalize on new, interesting opportunities. The experts note that performing these business case practices will not be easy, yet their effectiveness with regard to well-founded investment decision making and investment success will be high.
Jon Snow, a character in the book and TV series Game of Thrones, realized that it was nonsense to have a wall dividing 2 cultural groups, with 1 group living south of the wall and 1 relegated to the north side. They were so different and yet so similar because of a shared goal: to survive the common enemy.
I believe that in IT, we are in a similar situation. Now more than ever, diverse groups who share common goals but have different backgrounds, languages and cultures are required to cooperate. Unfortunately, our efforts to improve the specialization and competence of IT professionals are building a frustrating wall.
This way of operating cannot work. If we as IT professionals continue to deepen our technical and methodological skills without finding ways of effective communication and cooperation with other social groups, we are doomed to fail both in governance of enterprise IT (GEIT) and in value creation for the enterprise and society.
How can we change the status quo? This problem, of course, is not new. Social scientists studying similar situations have developed an interesting concept called boundary objects, which describes how different communities use the same information in different ways. Boundary work and boundary objects are only part of the solution. Other basic ingredients of the recipe for effective collaboration are a shared governance framework, business and IT eLeadership, and effective communication.
Ronald Zhao, Ph.D., Frank Bezzina, Ph.D., Pascal Lele, Ph.D., Simon Grima, Ph.D., Robert W. Klein, Ph.D., and Paul Kattuman, Ph.D.
Several comprehensive and systematic frameworks for risk management have recommended 3 lines of defense (LOD) for effective risk management and control. The focus of each of these 3 LODs is on governance, communication and human resources.
A previous Journal article of ours illustrates an automatic, dynamic system of audit and internal control interactions. It helps instill a culture wherein insurers and their risk counterparts articulate risk management and financial information through evidence-based management processes that are transparent and monitorable (as required by Solvency II and recommended by Basel III). It also helps with the collection, treatment and consideration of data previously considered uncertain and hidden by all the lines of activity within a firm.
The IT investor relationship management (IRM) modules of online analytical processing (OLAP) client server interactions can be customized to automate the alignment in real time of the 3 LOD principles with Basel III and the 3 pillars of Solvency II. This is achieved by using:
- Cost accounting modules of the first line of defense (pillar 1 and pillar 2)—The first LOD is organized through procedures of predictive asset management (financial and human resources [HR]), based on interactions between the finance function and the HR function to establish the front line of employees.
- Cost accounting modules of the second line of defense (pillar 2)—The second LOD is the function of operations management, which automatically provides independent oversight of enterprise risk management (ERM).
- Cost accounting modules of the third line of defense (pillar 3)—The third LOD is jointly assured by the reporting procedures of the HR function and the operations management function. They are modules of pillar 3/disclosure and transparency: Improve market discipline by facilitating comparisons and regulatory reporting requirements.
- These modules supply data through 3 reports (cost savings, working conditions and psychosocial risk reports) that are particularly useful for updating the risk profile when the financial and social quality of the counterparty risk (including counterparty credit risk [CCR]) is deteriorating.
Our JournalOnline article assesses the impact of this technology on employment in each of the G20 countries.
Jaewon Lee, CISA, CGEIT, CRISC, CIA, CRMA
There is no doubt that the importance of IT risk management is increasing at this very moment. Across various industries, customers’ demand for high availability of Internet services and products is increasing.
At the same time, cybercrime is getting more and more sophisticated, e.g., advanced persistent threats (APTs). In addition, recent IT trends necessitate expanding the current IT threat horizon to areas such as big data, cloud computing, mobile banking, zero trust networks and agile. In conjunction with this, I see that many enterprises tend to perform intensive and comprehensive risk assessments to evaluate their IT environments.
However, any IT risk assessment thus far is based on the current risk formula (Risk = Likelihood x Impact). It typically does not consider IT’s nature and characteristics, such as IT software architectural aspects (i.e., complexity), various security requirements (i.e., confidentiality, integrity and availability), and the availability of solutions to respond to risk.
To account for these factors, I present an enhanced risk formula (Risk = Criticality [Likelihood x Vulnerability Scores (CVSS)] x Impact) in my recent Journal article in order to calculate more effective and accurate risk ratings, particularly for software security vulnerabilities.
The benefits of using the enhanced formula, with its CVSS calculation logic, are clear:
- Criticality and risk ratings for software security vulnerabilities are calculated separately.
- Both IT characteristics and software architectural aspects are more clearly included.
- The method for estimating both criticality and risk ratings is consistent and repeatable.
- The enhanced risk formula is more objective.
- The availability of a solution to address software security vulnerabilities is considered.
These benefits help to estimate the criticality of software security vulnerabilities in the development environment, as the criticality is assessed before the potential impact is calculated.
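As a rough sketch of how the two formulas differ (the 0-10 CVSS scale, its normalization to weight likelihood, and the example numbers are my assumptions for illustration, not the author's published calculation logic), the comparison might look like this in Python:

```python
# Hypothetical comparison of the classic and enhanced risk formulas.
# Assumptions: likelihood is a probability in [0, 1], impact is a numeric
# rating, and the CVSS base score (0-10) is normalized to weight likelihood.

def classic_risk(likelihood: float, impact: float) -> float:
    """Classic formula: Risk = Likelihood x Impact."""
    return likelihood * impact

def enhanced_risk(likelihood: float, cvss_score: float, impact: float,
                  cvss_max: float = 10.0) -> float:
    """Enhanced formula: Risk = Criticality x Impact, where criticality
    weights likelihood by the vulnerability's normalized CVSS score."""
    criticality = likelihood * (cvss_score / cvss_max)
    return criticality * impact

# A vulnerability scored CVSS 7.5 lowers the rating relative to the classic
# formula; a maximal CVSS score of 10.0 leaves it unchanged.
print(classic_risk(0.4, 5.0))         # 2.0
print(enhanced_risk(0.4, 7.5, 5.0))   # 1.5
print(enhanced_risk(0.4, 10.0, 5.0))  # 2.0
```

The point of the sketch is that two vulnerabilities with identical likelihood and impact can still receive different risk ratings once their CVSS scores diverge, which is what makes the enhanced formula more discriminating for software security findings.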
Bostjan Delak, Ph.D., CISA, CIS, and Marko Bajec, Ph.D.
Several managers, owners and shareholders are asking the same questions daily:
- “Acquire and merge or do not acquire and merge?”
- “To outsource or not to outsource?”
- “To implement new technology or not to implement it?”
Performing qualitative and effective due diligence helps to reduce the associated risk and makes decision making easier, and there are several possible ways to do this.
From 1998 to 2008, we conducted more than 40 general IS due diligence engagements and more than 25 initial IS due diligence engagements in Central and Eastern Europe. At that time, there was a lack of due diligence frameworks. We studied different methodologies, approaches and standards (e.g., COBIT, ITIL, ISO/IEC 9000, ISO/IEC 27000, ISO/IEC 20000, BCM, ITADD, KnowledgeLeader), and through the years we assembled a new framework for rapid IS due diligence (FISDD). With this framework, IS due diligence can be delivered in a reasonably short period of time. FISDD was successfully tested on several real merger and acquisition case studies in the financial industry. It can be used for different types of IS due diligence, including:
- Initial—Should be conducted prior to the merger or acquisition of any organization
- General—Used upon the request of shareholders or an organization’s top management to determine the status of an important part of the IS, or the complete status of the IS, within the organization
- Vendor—Should be done before any outsourcing contract and repeated annually
- Technology—Performed on prospective technology investments
IS due diligence is very similar to the general IS audit process. However, due to its inherent complexity, it requires a framework for delivery. Our recent Journal article introduces the FISDD framework and delivers a timeline for using it.
Read Bostjan Delak and Marko Bajec’s recent ISACA Journal article:
“Conducting IS Due Diligence in a Structured Model Within a Short Period of Time,” ISACA Journal, volume 4, 2014.
Kevin Kobelsky, Ph.D., CISA, CA, CPA (Canada)
Five years ago I sat at a conference with leading practitioners and academics, watching a vendor describe a software-based internal control tool for use in enterprise resource planning (ERP) systems that generated a very large matrix of incompatible duties. It struck me as being overly complex and a reflection of experience rather than the product of profound design principles. At the break, I asked the 5 professionals and academics at my table (each of whom had 10-30 years of experience) what they thought segregation of duties (SoD) meant—I got 5 different answers.
Subsequently, I polled many more academic and professional colleagues and continued to get different answers. Many cited a model segregating asset custody, recording and authorization, while others added initiation or reconciliation. I reviewed textbooks and professional publications and found a variety of models but no detailed descriptions of the justification for the segregations proposed. In fact, many of these resources provided examples that were incorrect. (I confirmed these inaccuracies with multiple colleagues.) Examples from professional sources yielded large, unwieldy matrices with little or no rationale provided. It seemed that when new IT tasks arose, the matrices would merely add another duty to be segregated from those already existing, leading to segregation creep. Some professional colleagues commented that firms were beginning to push back, presenting strong counterarguments. So much for the notion of a generally accepted model of SoD in the profession!
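To make the idea of an incompatible-duties matrix concrete, here is a minimal hypothetical sketch in Python. The three-duty custody/recording/authorization split is the textbook model cited above; the duty names, data and function are my own illustration, not drawn from any particular tool or from the author's model:

```python
# Hypothetical segregation-of-duties (SoD) conflict check.
# The incompatible pairs below reflect the classic custody/recording/
# authorization model; real vendor matrices are far larger, as noted above.
INCOMPATIBLE_PAIRS = {
    frozenset({"authorization", "custody"}),
    frozenset({"authorization", "recording"}),
    frozenset({"custody", "recording"}),
}

def sod_violations(user_duties):
    """Return (user, conflicting-duty-pair) tuples for every user whose
    assigned duties contain an incompatible pair."""
    violations = []
    for user, duties in sorted(user_duties.items()):
        for pair in INCOMPATIBLE_PAIRS:
            if pair <= duties:  # the user holds both duties in the pair
                violations.append((user, tuple(sorted(pair))))
    return violations

assignments = {
    "alice": {"authorization", "custody"},  # conflicting assignment
    "bob": {"recording"},                   # fine on its own
}
print(sod_violations(assignments))  # [('alice', ('authorization', 'custody'))]
```

The "segregation creep" described above corresponds to each new IT duty adding rows to `INCOMPATIBLE_PAIRS` without any rationale, which is exactly why the matrices grow unwieldy.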
My recent Journal article presents the IT side of a general model for SoD to help reduce confusion. But because there are a variety of SoD definitions, my model may not be right for everyone. What would you change about this model to make it more applicable to you?
By Viktor Polic, Ph.D., CISA, CRISC, CISSP
A false sense of security puts information in danger. It results from a lack of risk perception and of historical records on information security incidents. The fact that there is no indication of an incident does not mean that systems have not been attacked; businesses may simply not be aware of it yet. Many reports on information security breaches show that they are discovered months, and sometimes even years, after the fact. One such example was Operation Shady RAT in 2011. In its annual Global Risk Index report, Lloyd’s has upgraded cyberrisk to position 3 from position 12. Despite this growing threat, many businesses still believe they are able to deal with the risk.
So how can organizations measure information security risk effectively? COBIT 5 recommends aligning IT risk management with the organization’s enterprise risk management framework. However, there is no accurate method for quantifying information security risk. The recently discovered vulnerability in OpenSSL cryptographic library (CVE-2014-0160, or more popularly called Heartbleed) illustrates a serious deficiency in the secure software development life cycle of that popular open source library. What would have been the accurate probability estimate of such a risk prior to the vulnerability disclosure?
There are many standards for measuring the resistance and robustness of security devices, e.g., burglar-resistant doors, car alarm systems, electromagnetic shielding and fire protection. Those measurements could be used to help quantify safety and security risk. However, there are no defined standards for quantifying information security risk. Nevertheless, there are companies that focus their skills on verifying the resistance of information systems against known hacking techniques and tools. These companies are known as ethical hackers or penetration testers.
Unfortunately, highly skilled manual work comes at a cost that many businesses cannot afford. A novel approach, discussed further in my recent Journal article, is to combine automated vulnerability assessment scanners with the manual work of ethical hackers through a front-end web-based application offered as a Software as a Service (SaaS) solution. The objective is to bring down the cost of such a security audit while increasing the accuracy of risk estimates.
Read Viktor Polic’s recent Journal article:
“Ethical Hacking: The Next Level or the Game Is Not Over?,” ISACA Journal, vol. 4, 2014.
Yo Delmar, CISM, CGEIT
Last year, the Harvard Business Review published a blog post that claimed, “IT governance is killing innovation.” Since then, I have talked to many business executives—both from IT and the business side—on this very subject. Most of them attest to the mounting importance of IT governance, especially in today’s world where technology has become increasingly pervasive across all business activities.
Some of the questions that popped up in our conversations included: Can IT governance programs do more than just manage IT operations and performance? Can IT collaborate better with the business to drive innovation? Can IT governance play a more transformative role in the organization?
I believe the answer to all 3 questions is yes. Robust IT governance programs that provide real insights can actually facilitate business innovation and growth. The keywords are “meaningful metrics and analytics.” When metrics are thoughtfully developed to closely align with both business objectives and the analytics framework, they enable an organization to fine-tune its strategies and to optimize resources toward maximizing its competitive advantage.
IT metrics act as building blocks of a larger business analytics program, which can help organizations make more informed decisions when it comes to opportunities to drive business performance and innovation. Most importantly, IT analytics empower the organization with the strategic, practical and operational insights it needs to invest in IT projects that have the most transformative power.
Here are a few things to consider while developing effective IT metrics and analytics for your IT governance program:
- Ensure that your IT metrics and analytics are defined and directed by enterprise goals, not the other way around.
- Partner with the business to create metrics around emerging technologies, like social media, that can boost brand performance.
- Define metrics that can quickly adapt to a dynamic business and technological environment.
- Choose metrics that are relevant across multiple initiatives, such as IT security, business continuity, disaster recovery, crisis management and asset protection.
- Ensure that people in the organization know what is being measured, how and why.
- Do not get locked into a static set of metrics or analytics that no longer measure what matters—constantly reevaluate them and their relevance to changing business goals.
With the right set of IT metrics and the resulting analytics framework, closely aligned with business strategy and performance objectives, IT departments can become centers of innovation and competitive advantage.
Read Yo Delmar’s recent Journal article:
“Leveraging Metrics for Business Innovation,” ISACA Journal, volume 4, 2014.
By Tim Myers
Many companies rely on threadbare IT resources or external advisors to guide them in making technology decisions and are understandably wary when considering new options, especially the multitude of software as a service (SaaS) features now available to them in the cloud.
Companies that are considering cloud-based services for the first time or that have made only marginal forays into the use of public or private data centers should walk before they run. As any battle-scarred veteran of the business world knows, new programs or projects stand a much better chance of success—and widespread acceptance—if they are approached in a methodical manner.
Rather than flying into the cloud headfirst, the prudent choice may be to take a more modular approach. With advice from IT leaders and any outside experts, companies can test out the cloud by piloting it first with 1 cloud-ready enterprise function, like accounting, email services or data backup. This way, organizations will quickly learn what works, what needs tweaking and whether or not the cloud is proving to be beneficial from a return on investment perspective.
Importantly, while assessing this initial foray into the cloud, the rest of the business will run as usual. Thus, any problems or delays can be ironed out without disrupting the rest of the enterprise.
If the cloud is living up to its billing, organizations should be ready to add additional cloud-ready functions and applications and enjoy further cost, productivity and security benefits. And given that 87 percent of recently surveyed cloud users would recommend the cloud to a peer or colleague, the likelihood of satisfaction is high.
By Haris Hamidovic, Ph.D., CIA, ISMS IA
Fire protection best practices encompass all social actors (government bodies, other institutions, and all legal entities and citizens). Such inclusion is logical and necessary, considering that a fire can occur in any area. As a result, all of these social subjects are made responsible for fire protection, and fire protection must be an integral part of their regular activities. Each entity also has an interest in protecting its personnel and property from fire. Each entity must be aware, first, of the causes of fire and, second, that it may itself be the cause of a fire.
Proper and consistent application of the technical norms and standards for design, installation, implementation, use and maintenance of electrical and other installations and devices is intended to prevent the outbreak of fire caused by these installations and devices. In many countries there is a legal obligation for correct and consistent application of appropriate fire protection measures provided for electrical and other installation, equipment and facilities.
The probability of fire originating in digital equipment (servers, storage units) is very low because there is little energy available to any fault and little combustible material within the equipment. But the associated risk may be significant, considering that IT equipment has become a vital and commonplace tool for business, industry, government and research groups. Numerous steps can be taken to avoid the risk of fire in the computer room environment. Compliance with the US National Fire Protection Association (NFPA) Standard for Fire Prevention NFPA 75 or British Standard 6266 will greatly increase fire safety in computer rooms. These standards recommend minimum requirements for the protection of computer rooms from damage by fire and its associated effects.
Read Haris Hamidovic’s recent Journal article:
“Fire Protection of Computer Rooms—Legal Obligations and Best Practices,” ISACA Journal, volume 4, 2014.