The Role of Incident Management in Identifying Gaps During Stabilization Period

Deploying an enterprise resource planning (ERP) system is challenging, and identifying gaps that could lead to risk is one of the most important aspects of stabilization. In my recent ISACA Journal article, I discuss how we can optimize incident management and use it to identify such gaps and risk factors at an early stage so corrective action can be taken.

Here are some key points that any enterprise should consider during the stabilization period:

  • Channel for end users to report issues—A robust process for end users to log issues builds comfort and provides confidence that issues are routed to the right contacts for timely resolution.
  • Structure of incident management—Ease of logging issues, triaging incidents to the right teams in a timely manner and assigning a level of priority are the fundamentals of a good incident management process.
  • Grading of incidents—The number of incidents encountered can be high; a mechanism to grade them and accord priority optimizes the resources assigned to deliver resolution.
  • Review of incidents—Monitoring the number of incidents and analyzing them can reveal critical design gaps with long-term impact on an organization’s processes, as well as governance issues.
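The grading and triage ideas above can be sketched in a few lines; the severity/impact matrix, field names and priority labels below are illustrative assumptions, not details from the article:

```python
# Hypothetical incident-grading sketch: priority is derived from a
# severity x business-impact matrix, a common incident management pattern.
from dataclasses import dataclass

PRIORITY_MATRIX = {
    # (severity, business_impact) -> priority label
    ("high", "high"): "P1",
    ("high", "low"): "P2",
    ("low", "high"): "P2",
    ("low", "low"): "P3",
}

@dataclass
class Incident:
    id: str
    severity: str         # "high" or "low"
    business_impact: str  # "high" or "low"
    team: str             # team the incident is triaged to

def grade(incident: Incident) -> str:
    """Assign a priority so scarce resources go to the worst gaps first."""
    return PRIORITY_MATRIX[(incident.severity, incident.business_impact)]

backlog = [
    Incident("INC-001", "high", "high", "finance"),
    Incident("INC-002", "low", "low", "logistics"),
]
# Review step: sort the backlog so P1 items surface first.
backlog.sort(key=grade)
```

A real implementation would use the ticketing tool's own fields, but the principle is the same: grading is a deterministic function of a small number of attributes, which keeps triage consistent across teams.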

In many of the deployment projects that I have been part of, incident management has not only aided in identifying gaps for early resolution, but also provided a mechanism to avoid a potential control and governance issue at a later date.
Read Rajul Kambli’s recent Journal article:
“Incident Management for ERP Projects,” ISACA Journal, volume 3, 2019.

Simplifying Enterprise Risk Analysis

How many enterprise risk analysis reports must an organization release? A few years ago, I faced this question in light of the cost, time and complexity of the solution. My conclusion is that 1 is enough.

Cost is a consequence of the details I need, the number of people involved and their time. Complexity can come from the need for training sessions (and increased costs). If refreshing basic information takes a lot of time, it is updated less frequently, and that obsolescence decreases the quality of the results.

I want to propose a methodology to assess risk based on 2 levels of evaluation in order to cover any need for detail, cut redundancy in data collection, provide simplicity in the assessment, keep update time low and ensure the flexibility to add and maintain any new control framework with minimal cost.

It sounds complex, but it is easy enough to do. In practice, risk is a calculation of the uncertainty in achieving business objectives. If we connect uncertainty about objectives to the level of maturity in enforcing the rules, then all the key users can be involved in the assessment, with each evaluation limited to their own work, so no training is required. Complementing this, an organization can also use a light and flexible software tool.
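As an illustration of the 2-level idea, here is a minimal sketch in which key users rate the maturity of the controls in their own area (level 1) and those ratings roll up into a risk score per business objective (level 2). The 1-5 maturity scale and the formulas are assumptions for illustration, not the article's method:

```python
# Level 1: each key user rates only the controls they operate (1..5).
# Level 2: ratings roll up into a per-objective risk score (0..1).
MATURITY_MAX = 5  # e.g., a CMMI-style 1..5 maturity scale

def control_risk(maturity: int) -> float:
    """Lower maturity in enforcing a rule means higher uncertainty (risk)."""
    return (MATURITY_MAX - maturity) / (MATURITY_MAX - 1)

def objective_risk(control_maturities: list[int]) -> float:
    """Level-2 roll-up: average control risk for one business objective."""
    return sum(control_risk(m) for m in control_maturities) / len(control_maturities)

# A key user rates only controls in their own area; no training needed.
print(objective_risk([5, 3, 1]))  # prints 0.5 (mix of mature and immature)
```

Because each evaluation is confined to one user's own work, the same raw ratings can feed multiple downstream documents (DPIA, BIA, RTP) without collecting the data twice.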

With this proposed methodology, we get several types of risk analysis and related documents: the risk analyses for International Organization for Standardization (ISO) certifications, the data protection impact assessment (DPIA) of the EU General Data Protection Regulation (GDPR) 2016/679, the business impact analysis (BIA), the risk treatment plan (RTP), IT security assessments, the level of compliance with laws, etc. This methodology provides all of this information in a single tool, managed by key users (who feed and analyze it) and top management (who make decisions and approve) in a continuous, virtuous loop, each within their own area of competence.

How to do this is explained in my 2-part Journal article.

Read Luigi Sbriz’s recent Journal article:
“Enterprise Risk Monitoring Methodology, Part 1,” ISACA Journal, volume 2, 2019.

What Are Challenges in Deployment and How Can They Be Mitigated?

Transformation offers many key benefits, and any enterprise that wants to sustain and grow in this ever-changing, fast-paced world will be subject to the deployment of new systems. In my recent ISACA Journal article, I discuss various challenges that any enterprise might experience and how the intensity of those challenges differs based on organizational dynamics and economic variables.

Here are some key points that any enterprise should consider in the deployment process:

  1. Getting the right people—Human capital is key to any endeavor’s success; treating human resources as a dispensable commodity could be fatal. It is imperative that leaders spend time and effort not only to get the right fit but also to retain talent to see a successful deployment of the project. People who have spent significant time in the business can add considerable value and foresight when they are part of such projects.
  2. Selecting the right fit—Compatibility is the key for long-term sustainability of any partnership. Hence, selecting a vendor (business partner) that would be a key stakeholder in success is very important.
  3. Defining the scope—Being optimistic is important, but being pragmatic is vital. As the famous saying goes, “do not bite off more than you can chew.” It is critical that the scope of a deployment is defined, analyzed, validated and communicated well before taking the first step.
  4. Communication is key—Communication to those who will be affected by change is important since it has a direct correlation with change management and the deployment’s outcome. Most change management issues emanate not from the system but from people who do not embrace the change.
  5. Post-go-live support and health check—Much like physiotherapy aids a steady recovery, appropriate post-go-live support and the right metrics to gauge stability and consistency are major indicators that prompt necessary and timely action.

In all the transitions and deployment projects that I have been associated with, proactive steps on these considerations have been the recipe not only for success but also for continuous improvement.

Read Rajul Kambli’s recent Journal article:
“Identifying Challenges and Mitigating Risk During Deployment,” ISACA Journal, volume 6, 2018.

Key Steps in a Risk Management Metrics Program

Performance evaluation of an organization’s risk management system ensures that the risk management process remains continually relevant to the organization’s business strategies and objectives. Organizations should adopt a risk metrics program to formally carry out performance evaluation. An effective risk metrics program helps in setting risk management goals (also known as benchmarks), identifying weaknesses, determining trends to better utilize resources and determining progress against the benchmarks.

My recent ISACA Journal article:

  • Discusses the need for linking key risk indicators (KRIs) to key performance indicators (KPIs), and how it helps in getting buy-in of business managers in risk management initiatives
  • Highlights how a risk metrics program can be used to integrate KRIs and KPIs for effective technology risk management
  • Leverages the 3-lines-of-defense model as a primary means to structure the roles and responsibilities for risk-related decision-making and control to achieve effective risk governance, management and assurance, and describes the distribution of KRIs among the 3 lines of defense
  • Discusses the role of governance, risk and compliance (GRC) tools in automating the risk metrics program and provides an overview of risk metrics automation workflow in a typical GRC solution

Practical Guidance

The key steps in the risk management metrics program are:

  • Select metrics based on the current maturity level of risk management and information security practices in your organization.
  • Develop the selected metrics by capturing all their relevant details in a predefined template (called the Metrics Master) to guide metrics collection, analysis and reporting activities. Consider covering such details as objective of the metric, entry criteria giving the prerequisite for implementing the metric, tasks involved, formula to calculate the metric value, the target value set for the metric, verification and validation, and exit criteria. A suggested template with a sample entry is provided in figure 1.
  • Implement the metrics and capture the evidence of implementation in a register (called the Metrics Data Register), and transfer the relevant data values to a pre-defined template (called the Metrics Calculation Register) to facilitate computation of metrics values.
  • Analyze the computed metrics values, evaluate the trends, identify the areas for process and control improvement, and draft an action plan for continuous improvement of information security and the risk management posture of your organization.
  • Report the risk management and information security trends as indicated by the metrics to the risk manager/information security manager who would review the trends and communicate further, if required, to various stakeholders.

Start the metrics program with a small number (e.g., 6) of metrics and add new metrics progressively as the risk management and information security maturity of your organization improves.
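The Metrics Master/Metrics Calculation Register flow above can be sketched as follows; the metric, field names and figures are hypothetical, not taken from the suggested template:

```python
# Minimal sketch of a Metrics Master entry plus the registers that feed it.
def pct(numerator: int, denominator: int) -> float:
    """Generic percentage formula used by the sample metric."""
    return 100.0 * numerator / denominator if denominator else 0.0

# Metrics Master: each metric carries its objective, formula and target.
metrics_master = {
    "patching_timeliness": {
        "objective": "Servers patched within the agreed window",
        "formula": pct,     # formula to calculate the metric value
        "target": 95.0,     # target value set for the metric
    },
}

# Metrics Data Register: raw evidence captured during implementation,
# transferred here as the Metrics Calculation Register inputs.
data_register = {"patching_timeliness": {"numerator": 180, "denominator": 200}}

def evaluate(name: str) -> tuple[float, bool]:
    """Analyze step: compute the metric value and compare it to target."""
    m = metrics_master[name]
    d = data_register[name]
    value = m["formula"](d["numerator"], d["denominator"])
    return value, value >= m["target"]

value, on_target = evaluate("patching_timeliness")  # (90.0, False)
```

Keeping formula and target alongside the metric definition is what makes the analysis and reporting steps mechanical: a reviewer only has to look at the computed value, the target and the trend.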

Author’s Note
Rama Lingeswara Satyanarayana Tammineedi is currently working with Tata Consultancy Services.

Read Rama Lingeswara Satyanarayana Tammineedi’s recent Journal article:
“Integrating KRIs and KPIs for Effective Technology Risk Management,” ISACA Journal, volume 4, 2018.

The Benefits and Risk of Blockchain Technology

Blockchain technology, which rose to prominence in 2008 with the publication of the fascinating white paper Bitcoin: A Peer-to-Peer Electronic Cash System, is widely predicted to drastically transform several sectors. For instance, blockchain-based smart contracts are anticipated to facilitate the direct, transparent and irreversible transfer of funds from donors to those in dire need, eliminating needless intermediary costs and cutting global poverty. The healthcare sector also fits the bill perfectly for blockchain implementation. Through its core virtue of decentralized architecture, blockchain could supplant archaic, fragmented and heterogeneous healthcare systems—boosting the quality of patient care and lowering healthcare delivery costs. Potential blockchain use cases are as wide-ranging as the enterprises trying them.

At the same time, for all the potential of blockchain, the technology is also rife with fresh and complex business risk. My recent article explores in depth 3 fundamental challenges business leaders should carefully consider to maximize blockchain’s potential.

Patchy Regulatory Frameworks
Until recently, there were very few laws anywhere governing digital currencies and initial coin offerings (ICOs). Regulators are starting to act, but the responses are still disjointed and sporadic. Jurisdictions such as China and Hong Kong have outlawed ICOs. Meanwhile, countries such as Australia, Switzerland and the United States have issued guidelines articulating the circumstances under which an ICO is deemed a security. The Central Bank of Nigeria, on the other hand, distanced itself from Bitcoin regulation, stating that it has no intention to regulate blockchain just as it has no intention to regulate the Internet.

Inevitably, these regulatory loopholes have lured counterfeiters and Ponzi schemers. Through promises of extraordinary returns, predatory enterprises are ensnaring unwitting investors and then vanishing after closing the purported ICO. Furthermore, as the German Federal Financial Supervisory Authority rightly warned, “Typically, projects financed using ICOs are still in their very early, in most cases experimental, stages and therefore their performance and business models have never been tested.”

Kicking the proverbial can down the road or assuming the cryptocurrency industry will proactively self-police would be naive and would turn a blind eye to the original intentions of cryptocurrency inventors. Regulators could, for example, take a cue from Canada’s Autorité des marchés financiers (AMF), which extended its regulatory sandbox to ICOs, providing an important window to become acquainted with ICO risk without stifling the technology. In addition, regulators should prohibit pension funds and other pools of public assets from investing in volatile and uncertain cryptocurrencies or ICOs.

Cybersecurity and Vulnerabilities
Since their inception, blockchains have been widely touted as “well-protected, reliable and immutable.” These supposed virtues have considerable merit—blockchain uses asymmetric keys to encrypt and decrypt content, ensuring high levels of authentication and nonrepudiation. But if we zoom into each high-profile cryptocurrency heist, we can easily conclude that blockchain implementations are rife with security flaws. Hackers continue to exploit common issues such as lack of multisignature support, low-security hot wallets, poor input validation, insider threats and a host of common defects to steal billions. Business leaders should thus carefully consider the security implications of each blockchain technology and ensure a minimum set of non-negotiable controls is baked into projects from inception.

Impediments to Transformational Change
As with any other disruptive trend, the rise of blockchain reignites the dynamic interplay between continuity and change. For instance, blockchain renders a wide array of existing applications obsolete, most of which have operated steadily over many years and still underpin strategic revenue lines. Furthermore, an enterprise’s culture—“elements of social behaviour and meaning that are stable and strongly resist change”1—can also present significant inertia to blockchain implementations as employees resist change and stick to their old ways of working. Maneuvering past these constant dualities thus requires a careful balance between innovation and business stability: Neither can be dealt with in isolation.

Read Phil Zongo’s recent Journal article:
“The Promises and Jeopardies of Blockchain Technology,” ISACA Journal, volume 4, 2018.

1 Rumelt, R. P.; Good Strategy/Bad Strategy: The Difference and Why It Matters, Profile Books, United Kingdom, 2011

Love Them or Loathe Them, Good IT Business Cases Are of Inestimable Value to Good IT Portfolio Managers

Many struggle to pull credible business cases together. Business case mechanics aside, the hard work involves not only identifying the required data, collecting them and ensuring that they are of the right quality, but also securing buy-in for the business case from stakeholders, hopefully without too much fudging. That business cases can be fudged highlights the importance of an explicit assumptions section; it is a vital component of a good business case because it can be used to assess the veracity of the business case’s inputs.

As hard as building a business case can be, properly assessing the contribution of new IT investments to the organization helps prevent wasting precious organizational resources on “investments” that yield little. A good business case also helps ensure a good understanding of the project’s dependencies on various organizational resources, all of which helps ensure the business success of the IT investment.

Furthermore, an IT business case is a key part of good IT governance, and good IT governance facilitates good corporate governance. Ultimately, corporate governance ensures that IT innovation—as a particular subset of IT investment—is suitably focused on the organization’s strategy and that it is appropriately resourced to fulfil its various promises.

One of the promises of IT innovation is high returns. Those in the investment community, however, know that high returns come at a cost: higher risk. Indeed, part of the reason many corporate IT innovations fail is that this risk is not identified, thereby compromising the innovation’s key promise: to advance the organization.

Interestingly, while IT innovation may be obvious in some organizations, in others, IT innovation is often relative. For example, in an organization still running on spreadsheets, the evolution to a database may be considered innovative. In most large organizations, there is a portfolio of IT investments that can be considered innovative, at least in their terms.

Given the previously mentioned riskiness, identifying innovative IT is key; in large organizations, categorizing it is something else. Usefully, an investment-grade business case communicates 2 things about a prospective IT innovation: first, its expected financial returns, and second, the expected variability of those returns (its riskiness). Armed with these 2 parameters, it becomes easy to identify the IT investments that are innovative in the context of the organization’s risk appetite. For these investments, actively managing the identified risk is a critical success factor.
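Given those 2 parameters per business case, flagging the investments that sit outside the risk appetite is straightforward. The sample returns, project names and threshold below are illustrative assumptions:

```python
# Sketch: rank IT investments by expected return and variability
# (riskiness), then flag those exceeding the organization's risk appetite.
import statistics

portfolio = {
    # project: modelled annual returns (e.g., from business-case scenarios)
    "crm_upgrade": [0.05, 0.06, 0.07],
    "ml_platform": [-0.10, 0.05, 0.40],
}

RISK_APPETITE = 0.10  # max acceptable standard deviation of returns

def assess(returns: list[float]) -> tuple[float, float]:
    """Return (expected return, riskiness) for one investment."""
    return statistics.mean(returns), statistics.pstdev(returns)

# Investments whose variability exceeds the appetite are the "innovative"
# ones that demand active risk management.
innovative = {
    name for name, r in portfolio.items() if assess(r)[1] > RISK_APPETITE
}
```

The point of the sketch is that once riskiness is quantified, "innovative" stops being a label and becomes a portfolio segment defined relative to the organization's own appetite.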

My recent Journal article “The Power of IT Investment Risk Quantification and Visualization: IT Portfolio Management” expands on all this, and sheds new light on IT portfolio management as a tool for managing different parts of the IT portfolio for maximum organizational impact.     

Read Guy Pearce’s recent Journal article:
“The Power of IT Investment Risk Quantification and Visualization: IT Portfolio Management,” ISACA Journal, volume 4, 2018.

Performing Cyberinsurance “CPR”

Cyberinsurance and data privacy will garner more focus for the remainder of 2018 and beyond. The impending “Equifax effect,” which most of us anticipated, was put forth in late February 2018 by the US Securities and Exchange Commission (SEC) in the form of guidance stating that public companies should inform investors about cybersecurity risk even if they have never succumbed to a cyberattack. The guidance also emphasizes that companies should disclose breaches publicly in a timely manner.

This development aligns perfectly with the (cyber)consumers, providers and regulators (CPR) cycle (see figure 1) I propose in my recent Journal article, which necessitates participation from 3 key players—cyberinsurance providers, consumers and regulators. This combined effort not only improves how cybersecurity risk is addressed and estimated from an insurance coverage perspective but also minimizes cataclysmic breaches. Providers need to be able to identify the right amount of cyberrisk they are willing to undertake in order to price coverage appropriately. That, in turn, depends on consumers knowing quantitatively how much risk they own.

Figure 1: CPR Cycle

Today, there are numerous ever-evolving cyberthreats (e.g., zero-day exploits, Internet of Things botnet distributed denial-of-service attacks, ransomware attacks) that result in costs not inherently covered by most cyberinsurance policies. Above all this, cyberinsurance has always been an add-on to traditional insurance policies. Historically, insurance companies relied on abundant data to decide how much auto or home insurance coverage could be offered to a person or entity. In the cyberworld, the common complaint from both providers and consumers is that there are not enough data to rely on.

Heat maps continue to be a staple resource for IT risk professionals estimating risk worldwide. In my experience performing security risk assessments, I always had a disconcerted feeling leveraging heat maps to estimate risk quantitatively. It turns out there are better, proven statistical and probabilistic methods that can be adopted to estimate cyberrisk quantitatively (in monetary figures rather than in colors of red, yellow or green), especially when there is a dearth of data. An organization’s emphasis should be on addressing the burgundy arrows in the CPR cycle, and my recent Journal article provides an overview of these methods, their potential benefits and references for attaining these goals.
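A minimal sketch of the kind of probabilistic method the article points to is a Monte Carlo simulation that expresses annual loss in dollars rather than colors. The breach frequency and loss-magnitude parameters below are illustrative assumptions, not calibrated figures:

```python
# Monte Carlo sketch: simulate annual cyber loss as (event occurs?) x
# (lognormal loss magnitude), then summarize the loss distribution.
import random

random.seed(42)  # reproducible illustration

TRIALS = 10_000
BREACH_PROB = 0.15       # assumed chance of a breach event in a year
LOSS_MU = 13.0           # lognormal mu: median loss of e^13 ~ $442K
LOSS_SIGMA = 1.0         # lognormal sigma: spread of loss magnitudes

losses = []
for _ in range(TRIALS):
    if random.random() < BREACH_PROB:
        losses.append(random.lognormvariate(LOSS_MU, LOSS_SIGMA))
    else:
        losses.append(0.0)

# Mean of the simulated distribution: annual loss expectancy in dollars.
annual_loss_expectancy = sum(losses) / TRIALS
# The full distribution also supports pricing, e.g. the 95th-percentile loss.
p95 = sorted(losses)[int(0.95 * TRIALS)]
```

Unlike a heat-map cell, the simulated distribution gives both a consumer and a provider the same monetary basis for deciding how much risk to retain and how much to transfer.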

The purpose of attempting cyberinsurance CPR is to build a continuously maturing ecosystem comprising:

  • Cyberinsurance providers—Able to provide coverage for the amount of risk they believe they are undertaking, without providing a surplus
  • Cyberinsurance consumers—Organizations with the prowess to estimate risk accurately, enabling them to transfer risk to insurance providers at optimal pricing while covering their bases when a breach ensues
  • Regulators—Able to impose better and more timely breach-reporting procedures on providers and consumers and compel organizations to continuously adopt robust security and privacy practices

Read Indrajit Atluri’s recent Journal article:
“Why Cyberinsurance Needs Probabilistic and Statistical Cyberrisk Assessments More Than Ever,” ISACA Journal, volume 2, 2018.

Leveraging Artificial Intelligence

ISACA has provided guidance on the definition and use of threat intelligence and its sources, which include ISACA feeds, consulting firms, open-source threat information and existing tools. In this guidance, ISACA indicated that artificial intelligence (AI) would be expected to extract insight from information and monitoring systems more effectively. As organizations look to improve their threat intelligence and mitigate existing and potential risk, they tend to ask what skills and resources are required to support such an initiative. Given resource constraints, organizations are exploring the use of AI to better understand the intelligence, since it can consume and analyze large volumes of data more efficiently. The enterprise must be equipped with skilled individuals capable of designing, implementing, supporting and maintaining AI technology.

The ISACA Tech Brief: Artificial Intelligence asks the following questions:

  • What regulatory and compliance requirements must be considered as part of AI deployments?
  • What are the existing capabilities of the organization and how might implementing AI benefit or impact the organization?
  • Will implementing AI require a complete overhaul of existing infrastructure? How challenging will it be to integrate artificial intelligence capabilities within the existing platform and adequately govern it?
  • Are the enterprise’s existing resources able to support AI, or will expertise need to be recruited externally?
  • How will the confidentiality, integrity and availability of large volumes of data be supported?
  • Will implementation of AI affect the personnel landscape of the enterprise, and if so, how will that change be managed?

In his Journal article, Phil Zongo asked the adopter to consider the following additional risk factors: 

  • Critical business decisions based on flawed or misused AI algorithms
  • Cultural resistance from employees whose roles are vulnerable to automation
  • Expanded cyberthreat surfaces as AI systems replace more vital business functions

My recent Journal article takes it one step further and asks the proposed adopter of AI to take a step back and evaluate:

  • What is the intent of the implementation of AI? Is the business looking to use AI to analyze the data or action the data? What is the business need and objective of AI?
  • What risk does AI pose to the business, staffing and IT models?
  • Can the organization rely on its third-party vendor for the use of AI in understanding threat intelligence?
  • Can the organization use AI to develop programmatic use cases?
  • What is the degree of maturity of the implementation of COBIT® in the organization?
  • Can a methodology and approach to measure the value added to the organization be derived?
  • Can the organization learn from associations such as ISACA where users have adopted AI for threat intelligence?

My recent Journal article raises the idea of further risk when the adopter asks to programmatically action threat intelligence without human intervention. This is a level of maturity in AI implementation that organizations must consider. COBIT® 5 provides the framework, from a 20,000-foot view, for performing this analysis.

Read Larry Marks’ recent Journal article:
“Security Monitoring as Part of the InfoSec Playbook,” ISACA Journal, volume 1, 2018.

Putting Machine Learning in Perspective

Machine learning is bandied about in the media often these days, many times erroneously. The key question that concerns auditors is not how to build machine learning algorithms or how to debate the relative merits of L1 versus L2 regularization, but rather in what context the algorithm is operating within the business. Additionally, do we have assurance that it meets all regulatory and business constraints and fulfills the needs of the enterprise?

Data scientists, of whom I am one, have the most fun working with algorithms, clustered together attempting to eke out another half a percentage point of accuracy from our models. However, that extra half point almost never translates into improved results for the organization, at least relative to the risk/reward. For technology auditors, knowing how to create machine learning algorithms or understanding the mathematical mechanics behind the models is not required, or even very helpful, in evaluating the effectiveness of machine learning in the enterprise. As auditors, we need to provide assurance over how the algorithms function in the business and determine whether proper governance and controls are in place to ensure the models operate in the best interests of the enterprise.

A favorite example of mine is that a model with 99% accuracy, say for fraud detection, can be practically worthless. If 99 out of 100 transactions are not fraud, we can achieve 99% accuracy simply by labeling every transaction as not fraud. That does nothing for us; what we care about in this example is the recall of the model. We want to make sure we detect the 1% of fraudulent transactions, even if we incur 4 false positives.

So, the moral of the story is to understand the business ramifications and provide assurance that the models accomplish something. This is where audit can provide the most value and where we as auditors should focus: understanding the context of machine learning algorithms and applications and providing assurance that they fulfill the business requirements while staying within the bounds of the relevant regulations.
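The accuracy-versus-recall point can be verified in a few lines of toy code (the data are synthetic, for illustration only):

```python
# 1 fraudulent transaction in 100, and a degenerate "model" that never
# flags fraud: 99% accuracy, 0% recall.
actual    = [1] + [0] * 99   # 1 = fraud, 0 = legitimate
predicted = [0] * 100        # model labels everything as not fraud

def accuracy(actual, predicted):
    """Fraction of all predictions that match the true label."""
    hits = sum(a == p for a, p in zip(actual, predicted))
    return hits / len(actual)

def recall(actual, predicted):
    """Fraction of true fraud cases the model actually catches."""
    true_pos = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    positives = sum(actual)
    return true_pos / positives if positives else 0.0

print(accuracy(actual, predicted))  # prints 0.99 -- looks great
print(recall(actual, predicted))    # prints 0.0  -- useless for fraud
```

For an auditor, the lesson is to ask which metric the business actually cares about before accepting a headline accuracy figure.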

Read Andrew Clark’s recent Journal article:
“The Machine Learning Audit—CRISP-DM Framework,” ISACA Journal, volume 1, 2018.

The Risk of Third Parties

I have developed and published a risk-based management approach to third-party data security, risk and compliance. The methodology provides process guidelines and a framework for enterprises’ boards of directors and senior management teams to consider when providing oversight, examination and risk management of third-party business relationships in the areas of information technology, systems and cybersecurity.

Through my business relationships and research, I have found that a number of professional surveys indicate that information technology and security managers, directors and executives report significant data breaches linked directly or indirectly to third-party access. Unfortunately, these security breaches are trending upward.

I have also found that there is an absence of a structured, quantifiable methodology to measure the risk a third party poses to an enterprise, and of clear expectations for the third party to substantiate that sound risk management is in place.

Types of Risk a Third Party May Have on an Enterprise
When a third party stores, accesses or transmits an enterprise’s data, or performs business activities for and with the enterprise, it poses potential risk. The degree of risk and its material effect are highly correlated with the sensitivity and transaction volume of the data.

Outsourcing certain activities to a third party poses potential risk to the enterprise. Some of those risk factors could have adverse impacts in the form of, but not limited to, strategic, reputational, financial, legal or information security issues. Other adverse impacts include service disruption and regulatory noncompliance.

I have to emphasize that the third parties include, but are not limited to, technology service providers; payroll services; accounting firms; invoicing and collection agencies; benefits management companies; and consulting, design and manufacturing companies. Most third-party commercial relationships require sending and receiving information, accessing the enterprise networks and systems, and using the enterprise’s computing resources. The risk posed at different levels and the impacts range from low to very significant.
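As an illustration of a quantifiable scoring along these 2 drivers, here is a minimal sketch; the 1-5 scales, multiplicative score, threshold and vendor names are assumptions, not the article's methodology:

```python
# Sketch: score third-party risk from data sensitivity and transaction
# volume, the two drivers the risk is said to correlate with.
def third_party_risk(sensitivity: int, volume: int) -> int:
    """Score 1..25; higher means the relationship needs closer oversight."""
    assert 1 <= sensitivity <= 5 and 1 <= volume <= 5
    return sensitivity * volume

vendors = {
    "payroll_service": third_party_risk(sensitivity=5, volume=4),
    "design_firm": third_party_risk(sensitivity=2, volume=1),
}

# Relationships above the threshold get enhanced due diligence.
high_risk = [name for name, score in vendors.items() if score >= 15]
```

Even a crude score like this gives boards a consistent basis for tiering vendors, which is the first step toward the structured oversight the article argues is missing.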

In my experience, it is critical to share with enterprise management teams that outsourcing an activity to an outside entity by no means removes the responsibility, obligation or liability from the enterprise; outsourced activities are considered integral and inherent to operations. As a result, the enterprise is obliged to identify and mitigate the risk imposed on it by third-party commercial relationships.

I encourage subject matter experts and professionals with management responsibility to read my Journal article describing this methodology and its quantifiable representation, a risk-based management approach to third-party data security, risk and compliance, as shown in figure 1.

Figure 1

Read Robert Putrus’ recent Journal article:
“A Risk-Based Management Approach to Third-Party Data Security, Risk and Compliance,” ISACA Journal, volume 6, 2017.
