The Need for a Holistic Counterfraud Program
Zhiwei Fu, Ph.D., CISA, CGEIT, CRISC, CFE, PMP, John W. Lainhart IV, CISA, CISM, CGEIT, CRISC, CIPP/G, CIPP/US and Alan Stubbs, MAS

Fraud is a common risk that should not be ignored. Any organization that fails to protect itself appropriately faces increased vulnerability to fraud. Organizations that collect or disburse monies have long been targets of fraud—whether by actors illegally collecting US Social Security or US Medicare benefits or by fraudulently obtaining tax refunds.
Criminal elements are becoming increasingly sophisticated and continually find new ways to game the system. Organizations also face significant increases in information flow, with a wide range of data architectures coming from disparate and unstructured sources at different velocities. This new reality brings with it new challenges. Organizations now need to find and respond to nonobvious instances and patterns of fraudulent behavior more effectively and more quickly. They must develop the analytical capabilities to actually detect fraud efficiently and predict emerging trends based on suspicious behaviors.
In our recent Journal article, we recommended that organizations develop an integrated and holistic counterfraud program that enables enterprise-wide information sharing and collaboration to prevent first, detect early, respond effectively, monitor continuously and learn constantly. Specifically, organizations should focus on the following critical success factors for effective and efficient fraud risk management:

Fraud Risk Management by Design

  • Effective governance and clear organizational responsibility
  • Integrated framework and holistic approach
  • Rigorous risk assessment process
  • Ecosystems of counterfraud capabilities

Big Data and Advanced Analytics

  • Big data and technology
  • Intelligence and predictive analytics
  • Anticipation and preemption of emerging trends 

Risk-based Approach

  • Countering fraud is an ongoing and continually evolving process
  • Balancing enhancements of the process, organization and governance, and technological capabilities with their proper integration enterprise-wide

Continual Collaboration and Learning

  • Fraud detection and prevention is more than an information gathering exercise and technology adoption
  • Fraud prevention is an entire life cycle with continuous feedback, learning, application and improvement
Read Zhiwei Fu, John W. Lainhart IV and Alan Stubbs’ recent Journal article:
“Manage What Is Known and Not Known,” ISACA Journal, volume 5, 2014.
Why Is There Dust on Your Business Case Document?
Kim Maes, Steven De Haes, Ph.D., and Wim Van Grembergen, Ph.D.
In our empirical research, conversations with chief information officers, business sponsors and project managers showed that most people in contemporary organizations know what a business case signifies. They have a rather good idea of what should be included when developing such an investment document, and most of them understand its purpose. Moreover, they acknowledge that a business case may play an essential role in the decision making and ultimate benefits realization of an IT-enabled investment. Not surprisingly, we found that a very large proportion of European companies develop some kind of business case today.

However, many of these people could not give an adequate answer to how they were using such a business case after the investment was finally approved. Most of the business cases gathered dust on someone’s shelf or hard disk, and the realization of investment benefits was not tracked after the official launch of the end products and services. The practice of using a business case is characterized by the so-called knowing-doing gap. Organizations understand the importance of using a business case throughout the entire investment life cycle, but very few organizations are acting upon this important insight.

The advice coming out of our expert and case research is straightforward. First, organizations should start by clearly articulating what the investment is about and what it should realize. Second, developing and using a business case is not a solitary activity; relevant stakeholders should be closely involved throughout the entire process of business case use. Finally, the maintenance of the business case during investment implementation requires an equal amount of attention in order to cope with escalations or capitalize on new, interesting opportunities. The experts note that performing these kinds of business case practices will not be an easy task, yet their effectiveness with regard to well-founded investment decision making and investment success will be high.

Read Kim Maes, Steven De Haes and Wim Van Grembergen’s recent Journal article:
“The Business Case as an Operational Management Instrument—A Process View,” ISACA Journal, volume 4, 2014.
The Wall and Boundaries—Mild Spoiler Warning
Giuliano Pozza
Jon Snow, a character in the book and TV series Game of Thrones, realized that it was nonsense to have a wall dividing 2 cultural groups, with 1 group living south of the wall and 1 relegated to the north side. They were so different and yet so similar because of a shared goal: to survive the common enemy.

I believe that we in IT are in a similar situation. Now more than ever, diverse groups who share common goals but have different backgrounds, languages and cultures are required to cooperate. Unfortunately, our efforts to improve the specialization and competence of IT professionals are building a frustrating wall.
This way of operating cannot work. If we as IT professionals continue to deepen our technical and methodological skills without finding ways of effective communication and cooperation with other social groups, we are doomed to fail both in governance of enterprise IT (GEIT) and in value creation for the enterprise and society.

How can we change the status quo? This problem, of course, is not new. Social scientists studying similar situations have come up with an interesting concept called boundary objects—shared artifacts that different communities can use and interpret in their own ways while still collaborating around them. Boundary work and boundary objects are only a part of the solution. Other basic ingredients of the recipe for effective collaboration are a shared governance framework, business and IT eLeadership, and effective communication.
Read Giuliano Pozza’s recent JournalOnline article:
“A Social Approach to IT Governance,” ISACA Journal, volume 4, 2014.
Time to Act: Operational Risk Leverage in Risk Management
Ronald Zhao, Ph.D., Frank Bezzina, Ph.D., Pascal Lele, Ph.D., Simon Grima, Ph.D., Robert W. Klein, Ph.D., and Paul Kattuman, Ph.D.
Several comprehensive and systematic frameworks for risk management have recommended 3 lines of defense (LOD) for effective risk management and control. The focus of each of these 3 LODs is on governance, communication and human resources.
A previous Journal article of ours illustrates an automatic, dynamic system of audit and internal control interactions. It helps instill a culture in which insurers and their risk counterparts articulate risk management and financial information through evidence-based management processes that are transparent and monitorable (as required by Solvency II and recommended by Basel III). It also helps with the collection, treatment and consideration of data previously considered uncertain and hidden across all the lines of activity within a firm.
The IT investor relationship management (IRM) modules of online analytical processing (OLAP) client server interactions can be customized to automate the alignment in real time of the 3 LOD principles with Basel III and the 3 pillars of Solvency II. This is achieved by using:
  1. Cost accounting modules of the first line of defense (pillar 1 and pillar 2)—The first LOD is therefore organized by the device of procedures of predictive asset management (financial and human resources [HR]) based on the interactions of the finance function and the HR function to establish the frontline employees.
  2. Cost accounting modules of the second line of defense (pillar 2)—The second LOD is the function of operations management, which automatically provides independent oversight of enterprise risk management (ERM).
  3. Cost accounting modules of the third line of defense (pillar 3)—The third LOD is jointly assured by the reporting procedures of the HR function and the operations management function. These are modules of pillar 3 (disclosure and transparency): improve market discipline by facilitating comparisons and regulatory reporting requirements.
    - These modules supply data through 3 reports (cost savings, working conditions and psychosocial risk reports) that are particularly useful for updating the risk profile when the financial and social quality of counterparty risk (including counterparty credit risk [CCR]) is deteriorating.
Our JournalOnline article assesses the impact of this technology on employment in each of the G20 countries.
Read Ronald Zhao, Frank Bezzina, Pascal Lele, Simon Grima, Robert W. Klein, and Paul Kattuman’s recent JournalOnline article:
“Potential Impact of IT-directed Investor Relationship Management (IRM) on Employment in G20 Countries,” ISACA Journal, volume 4, 2014.
The Importance of an Enhanced Risk Formula for Software Security Vulnerabilities
Jaewon Lee, CISA, CGEIT, CRISC, CIA, CRMA
There is no doubt that the importance of IT risk management is increasing at this very moment. Across various industries, customers’ demand for a high availability of Internet services and products is increasing.
At the same time, cybercrime is becoming more and more sophisticated, e.g., advanced persistent threats (APTs). In addition, recent IT trends necessitate expanding the current IT threat horizon to areas such as big data, cloud computing, mobile banking, zero trust networks and agile development. In conjunction with this, I see that many enterprises tend to perform intensive and comprehensive risk assessments to evaluate their IT environment.
However, any IT risk assessment thus far is based on the current risk formula (Risk = Likelihood x Impact). It typically does not consider IT’s nature and characteristics, such as IT software architectural aspects (i.e., complexity), various security requirements (i.e., confidentiality, integrity and availability), and the availability of solutions to respond to risk. 
To account for these factors, I present an enhanced risk formula (Risk = Criticality [Likelihood x Vulnerability Scores (CVSS)] x Impact) in my recent Journal article in order to calculate more effective and accurate risk ratings, particularly for software security vulnerabilities.
The benefits of using the enhanced formula, with the CVSS calculation logic, are clear:

  • Criticality and risk ratings for software security vulnerabilities are calculated separately.
  • Both IT characteristics and software architectural aspects are more clearly included.
  • The method to estimate both criticality and risk ratings is consistent and repeatable.
  • The enhanced risk formula is more objective.
  • The availability of a solution to address software security vulnerabilities is considered.
These benefits help to estimate the criticality of software security vulnerabilities in the development environment, as the criticality is assessed before the potential impact is calculated.
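As a rough illustration of how the enhanced formula changes a calculation, here is a minimal sketch. The 1-5 ordinal scales for likelihood and impact and the normalization of the criticality term are illustrative assumptions, not the article's prescribed values; only the structure Risk = Criticality[Likelihood x CVSS] x Impact comes from the text.

```python
# Sketch of the enhanced risk formula:
#   Risk = Criticality[Likelihood x CVSS score] x Impact
# The 1-5 scales and the /5 normalization are illustrative assumptions.

def criticality(likelihood: int, cvss_score: float) -> float:
    """Combine likelihood (1-5 ordinal) with a CVSS base score (0.0-10.0)."""
    if not 1 <= likelihood <= 5:
        raise ValueError("likelihood must be on a 1-5 scale")
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS base score must be between 0.0 and 10.0")
    # Normalize the product back onto a 0-10 criticality scale.
    return likelihood * cvss_score / 5.0

def risk_rating(likelihood: int, cvss_score: float, impact: int) -> float:
    """Enhanced formula: criticality is computed first, then scaled by impact (1-5)."""
    return criticality(likelihood, cvss_score) * impact

# A likely (4/5), high-severity (CVSS 9.8) vulnerability with major impact (5/5):
print(round(risk_rating(likelihood=4, cvss_score=9.8, impact=5), 1))  # 39.2
```

Separating the criticality step, as the article suggests, lets a development team rank vulnerabilities by severity before the business impact is even known.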

Read Jaewon Lee’s recent JournalOnline article:
“An Enhanced Risk Formula for Software Security Vulnerabilities,” ISACA Journal, volume 4, 2014.
The Uses for a Due Diligence Framework
Bostjan Delak, Ph.D., CISA, CIS, and Marko Bajec, Ph.D.

Many managers, owners and shareholders ask the same questions daily:

  • “Acquire and merge or do not acquire and merge?”
  • “To outsource or not to outsource?”
  • “To implement new technology or not to implement it?”

Performing qualitative and effective due diligence helps to reduce the associated risk and makes decision making easier, and there are several possible ways to do this.

From 1998 to 2008, we conducted more than 40 general IS due diligence engagements and more than 25 initial IS due diligence engagements in Central and Eastern Europe. At that time, there was a lack of due diligence frameworks. We studied different methodologies, approaches and standards (e.g., COBIT, ITIL, ISO/IEC 9000, ISO/IEC 27000, ISO/IEC 20000, BCM, ITADD, KnowledgeLeader) and, through the years, assembled a new framework for rapid IS due diligence (FISDD). With this framework, IS due diligence may be delivered in a reasonably short period of time. FISDD was successfully tested on several real merger and acquisition case studies in the financial industry. It can be used for different types of IS due diligence, including:

  • Initial—should be conducted prior to the merger or acquisition of any organization
  • General—used upon the request of shareholders or an organization’s top management to determine the status of an important part of IS or the complete status of IS within the organization
  • Vendor—should be done before any outsourcing contract and should be repeated annually
  • Technology—performed on prospective technology investments

IS due diligence is very similar to the general IS audit process. However, due to its inherent complexity, it requires a framework for delivery. Our recent Journal article introduces the FISDD framework and delivers a timeline for using it.

Read Bostjan Delak and Marko Bajec’s recent ISACA Journal article:
“Conducting IS Due Diligence in a Structured Model Within a Short Period of Time,” ISACA Journal, volume 4, 2014.

Stopping the Segregation of Duties Creep and Confusion
Kevin Kobelsky, Ph.D., CISA, CA, CPA (Canada)
Five years ago I sat at a conference with leading practitioners and academics, watching a vendor describe a software-based internal control tool for use in enterprise resource planning (ERP) systems that generated a very large matrix of incompatible duties. It struck me as being overly complex and a reflection of experience rather than the product of profound design principles. At the break, I asked the 5 professionals and academics at my table (each of whom had 10-30 years of experience) what they thought segregation of duties (SoD) meant—I got 5 different answers.
Subsequently, I polled many more academic and professional colleagues and continued to get different answers. Many cited a model segregating asset custody, recording and authorization, while others added initiation or reconciliation. I reviewed textbooks and professional publications and found a variety of models but no detailed descriptions of the justification for the segregations proposed. In fact, many of these resources provided examples that were incorrect. (I confirmed these inaccuracies with multiple colleagues.) Examples from professional sources yielded large, unwieldy matrices with little or no rationale provided. It seemed that when new IT tasks arose, the matrices would merely add another duty to be segregated from those already existing, leading to segregation creep. Some professional colleagues commented that firms were beginning to push back, presenting strong counterarguments. So much for the notion of a generally accepted model of SoD in the profession!
My recent Journal article presents the IT side of a general model for SoD to help reduce confusion. But because there are a variety of SoD definitions, my model may not be right for everyone. What would you change about this model to make it more applicable to you?
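To make the idea of an incompatible-duties matrix concrete, here is a minimal sketch of how such a check works in practice. The duty names and the incompatible pairs are hypothetical examples for illustration only, not Kobelsky's model or any vendor's rule set.

```python
# Minimal segregation-of-duties check. The duties and incompatible
# pairs below are hypothetical examples, not a prescribed model.

# Each frozenset is a pair of duties one person should not hold together,
# following the classic custody/recording/authorization segregation.
INCOMPATIBLE = {
    frozenset({"authorize_payment", "record_payment"}),
    frozenset({"record_payment", "custody_of_assets"}),
    frozenset({"authorize_payment", "custody_of_assets"}),
}

def sod_violations(assigned_duties: set) -> list:
    """Return every incompatible pair fully contained in one user's duties."""
    return [pair for pair in INCOMPATIBLE if pair <= assigned_duties]

violations = sod_violations({"authorize_payment", "record_payment", "run_reports"})
print(len(violations))  # 1: authorize_payment together with record_payment
```

Note how every newly invented duty that gets added to such a matrix multiplies the pairs to check—exactly the segregation creep described above.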

Read Kevin Kobelsky’s recent Journal article:
“Enhancing IT Governance With a Simplified Approach to Segregation of Duties,” ISACA Journal, volume 4, 2014.
Ethical Hacking and Its Value to Security
Viktor Polic, Ph.D., CISA, CRISC, CISSP
A false sense of security puts information in danger. It results from a lack of risk perception and of historical records of information security incidents. The fact that there is no indication of an incident does not mean that systems have not been attacked; businesses may just not be aware of it yet. Many reports on information security breaches show that they are discovered months, and sometimes even years, after the fact. One such example was Operation Shady RAT in 2011. In its annual Global Risk Index report, Lloyd’s has upgraded cyberrisk to position 3 from position 12. Despite the growing threat, many businesses still believe they are able to deal with the risk.

So how can organizations measure information security risk effectively? COBIT 5 recommends aligning IT risk management with the organization’s enterprise risk management framework. However, there is no accurate method for quantifying information security risk. The recently discovered vulnerability in the OpenSSL cryptographic library (CVE-2014-0160, popularly known as Heartbleed) illustrates a serious deficiency in the secure software development life cycle of that popular open-source library. What would have been an accurate probability estimate of such a risk prior to the vulnerability disclosure?

There are many standards for measuring the resistance and robustness of security devices, e.g., burglar-resistant doors, car alarm systems, electromagnetic shielding, fire protection. Those measurements could be used to help quantify safety and security risk. However, there are no defined standards for quantifying information security risk. Nevertheless, there are companies that focus their skills on verifying the resistance of information systems against known hacking techniques and tools. These companies are known as ethical hackers or penetration testers.

Unfortunately, highly skilled manual work comes at a cost that many businesses cannot afford. A novel approach, discussed further in my recent Journal article, is to combine automated vulnerability assessment scanners with the manual work of ethical hackers through a front-end, web-based application offered as a Software as a Service (SaaS) solution. The objective is to bring down the cost of such a security audit while increasing the accuracy of risk estimates.

Read Viktor Polic’s recent Journal article:
“Ethical Hacking: The Next Level or the Game Is Not Over?” ISACA Journal, volume 4, 2014.

How IT Governance Can Spur Innovation With the Right Metrics
Yo Delmar, CISM, CGEIT
Last year, the Harvard Business Review published a blog post that claimed, “IT governance is killing innovation.” Since then, I have talked to many business executives—both from IT and the business side—on this very subject. Most of them attest to the mounting importance of IT governance, especially in today’s world where technology has become increasingly pervasive across all business activities.

Some of the questions that popped up in our conversations included: Can IT governance programs do more than just manage IT operations and performance? Can IT collaborate better with the business to drive innovation? Can IT governance play a more transformative role in the organization?

I believe the answer to all 3 questions is yes. Robust IT governance programs that provide real insights can actually facilitate business innovation and growth. The keywords are “meaningful metrics and analytics.” When metrics are thoughtfully developed to closely align with both business objectives and the analytics framework, they enable an organization to fine-tune its strategies and to optimize resources toward maximizing its competitive advantage.

IT metrics act as building blocks of a larger business analytics program, which can help organizations make more informed decisions when it comes to opportunities to drive business performance and innovation. Most importantly, IT analytics empower the organization with the strategic, practical and operational insights it needs to invest in IT projects that have the most transformative power.

Here are a few things to consider while developing effective IT metrics and analytics for your IT governance program:

  • Ensure that your IT metrics and analytics are defined and directed by enterprise goals, not the other way around.
  • Partner with the business to create metrics around emerging technologies, like social media, that can boost brand performance.
  • Define metrics that can quickly adapt to a dynamic business and technological environment.
  • Choose metrics that are relevant across multiple initiatives, such as IT security, business continuity, disaster recovery, crisis management and asset protection.
  • Ensure that people in the organization know what is being measured, how and why.
  • Do not get locked into a static set of metrics or analytics that no longer measure what matters—constantly reevaluate them and their relevance to changing business goals.

With the right set of IT metrics and the resulting analytics framework, closely aligned with business strategy and performance objectives, IT departments can become centers of innovation and competitive advantage.

Read Yo Delmar’s recent Journal article:
“Leveraging Metrics for Business Innovation,” ISACA Journal, volume 4, 2014.

Considering Cloud Services? Walk Before You Run
By Tim Myers
Many companies rely on threadbare IT resources or external advisors to guide them in making technology decisions and are understandably wary when considering new options, especially the multitude of software as a service (SaaS) features now available to them in the cloud.

Companies that are considering cloud-based services for the first time or that have made only marginal forays into the use of public or private data centers should walk before they run. As any battle-scarred veteran of the business world knows, new programs or projects stand a much better chance of success—and widespread acceptance—if they are approached in a methodical manner.

Rather than flying into the cloud headfirst, the prudent choice may be to take a more modular approach. With advice from IT leaders and any outside experts, companies can test out the cloud by piloting it first with 1 cloud-ready enterprise function, like accounting, email services or data backup. This way, organizations will quickly learn what works, what needs tweaking and whether or not the cloud is proving to be beneficial from a return on investment perspective.

Importantly, while assessing this initial foray into the cloud, the rest of the business will run as usual. Thus, any problems or delays can be ironed out without disrupting the rest of the enterprise.

If the cloud is living up to its billing, organizations should be ready to add on additional cloud-ready functions and applications and enjoy further cost, productivity and security benefits. And given that 87 percent of cloud users surveyed recently would recommend the cloud to a peer or colleague, the likelihood of satisfaction is high.

Read Tim Myers’ recent Journal article:
“Trial by Fire in Cloud Development Pays Dividends,” ISACA Journal, volume 4, 2014.