“Artificial intelligence (AI) is proving to be a double-edged sword. While this can be said of most new technologies, both sides of the AI blade are far sharper, and neither is well understood.” - McKinsey Quarterly April 2019
In Greek mythology, the courtier Damocles was forced to sit beneath a sword suspended by a single hair to emphasize the precariousness of kings’ fortunes. Hence the expression “the sword of Damocles,” meaning an ever-present danger.
In this metaphor, the users of artificial intelligence are like kings: this cutting-edge technology grants them remarkable capabilities, yet a sword hangs over their heads because the perils of AI scale just as readily as its benefits.
Artificial Intelligence: Meaning and Significance
To quote a formal definition, AI is “the art of creating machines that perform functions that require intelligence when performed by people” (Kurzweil, 1990).
However, intelligence itself is an elusive concept. Though we know that humans require intelligence to solve their day-to-day problems, it is not clear that the techniques computers use to solve those same problems endow them with human-like intelligence. In fact, computers use approaches very different from those used by humans. To illustrate, chess-playing computers use their immense speed to evaluate millions of positions per second – a strategy no human champion could employ. Computers have also used specialized techniques to predict consumers’ product choices by sifting through huge volumes of data, and to recognize biometric, speech and facial patterns.
Having said that, humans use their emotions to arrive at better decisions – something a computer (at least at present) is incapable of doing. Still, by developing sophisticated techniques, AI researchers have solved many important problems, and the solutions are used in many applications. In health and medical disciplines, AI is already providing advanced solutions and yielding groundbreaking insights. AI techniques have become ubiquitous, and new applications are found every day. Per the April 2019 McKinsey Quarterly report, AI could deliver additional global economic output of $13 trillion per year by 2030.
AI Risk and Potential Remediating Measures
Along with all the aforementioned positive outcomes, AI brings innumerable risks of different types, ranging from minor embarrassments to catastrophic events that could endanger humankind. Let us enumerate and detail some of the risks known to be brought on by AI:
1. Lack of Complete Knowledge of the Intricacies of AI
AI is a recent phenomenon in the business world, and many leaders are not knowledgeable about its potential risk factors, even though they are forced to embrace it due to market and competitive pressures. The consequences could range from a minor mistake in decision-making to a loss of customer data leading to privacy violations. Remediating measures include making everybody in the enterprise accountable, ensuring board-level visibility, and conducting a thorough risk assessment before embarking on AI initiatives.
2. Data Protection
The huge amount of data involved – predominantly unstructured and drawn from sources such as the web, social media, mobile devices, sensors and the Internet of Things – is not easy to protect from loss or leakage, which can lead to regulatory violations. A strong end-to-end process needs to be built, with robust access control mechanisms and a clear description of need-to-know privileges.
3. Technological Interfaces
AI mainly works on interfaces where many windows are available for data feeds coming from various sources. Care should be taken to ensure that the data flows, business logic and associated algorithms are all accurate, to avoid costly mishaps and embarrassment.
4. Security
This is a big issue, as evidenced by ISACA’s Digital Transformation Barometer, which shows that 60 percent of industry practitioners lack confidence in their organization’s ability to accurately assess the security of systems based on AI and machine learning. AI works at a huge scale of operations, so every precaution should be taken to ensure the perimeter is secured. All aspects of logical, physical and application security need to be looked into with more rigor than would otherwise be warranted.
5. Human Errors and Malicious Actions
Protect AI from humans and humans from AI. Insider threats, such as disgruntled employees injecting malware or faulty code, could spell disastrous outcomes or even lead to catastrophic events like the destruction of critical infrastructure. Proper monitoring of activities, segregation of duties, and effective communication and counseling from top management are good suggested measures.
The deployment of AI may lead to discrimination and displacement within the workforce, and could even endanger the lives of those who work alongside AI-driven machines. This can be partly remediated by upskilling workers and placing humans at vantage points in supply chains, where they play an important role in sustaining customer relationships. To prevent AI-related workplace perils, rigorous checking of scripts and the installation of fail-safe mechanisms, such as manual overrides, will be helpful.
6. Proper Transfer of Knowledge and Atrophy Risk
The intelligence humans use to solve a problem is transferred to machines through programs, so that machines can solve the same problem at a much larger scale with great speed and accuracy. Care should therefore be taken that no representative data or logic is left out or stated erroneously, lest the result be poor outcomes and decisions that cause losses to the business.
Because skilled humans cede tasks to machines, those skills can erode over time, resulting in atrophy. This can be partly remediated by keeping an up-to-date manual covering such critical skills, including disaster recovery mechanisms.
Disclaimer: The views expressed in this article are the author’s own and do not represent those of the organization or of the professional bodies with which he is associated.
As a risk practitioner, have you ever tried to describe what you do for a living to a family member or a friend? If so, you’ve likely experienced their acquiescent and politely confused reaction as you articulate concepts like risk assessments, controls, tests, tolerance, appetite, key risk indicators, governance and a host of other tactics that are commonly executed as part of a practitioner’s day-to-day responsibilities. At the conclusion of your pride-filled intellectual description, you feel like you did a great job explaining what you do, when your conversational partner replies with, “Wow, that sounds awesome! So, what do you actually do?” Uncertain about how to respond, you begin to retrace your words only to realize that internally, you are asking yourself that very same question, combined now with an unclear perspective about your professional identity. You ponder, “What DO I do, and, who am I as a professional?”
Over the past 20 years, I’ve observed a plight all too common among risk practitioners wherein there is an enthusiastic rigor to schedule tasks, complete action plans, provide reporting/updates and declare that risks have been mitigated, only for the most predictable of questions to follow: “So, what risk did we eliminate/reduce and how does that add value to our organization?” The enduring effort by the risk practitioner to complete tasks and assignments propagates and reinforces an illusion of risk management, because work, in the form of tasks and actions, was completed.
Reality strikes! In the absence of an industry framework with principles, a common taxonomy and structured objectives to clearly articulate how issues, losses and events are being prevented or reduced, the risk practitioner’s reputation, brand, self-esteem and identity progressively deteriorate. I’ve equipped hundreds of professionals with the training and tools provided by the CRISC certification, and the outcome is nearly always the same: CRISC training/certification serves as a catalytic fuel, energizing the risk practitioner’s identity while accelerating organizational maturity in the direction of a value-driven, risk-intelligent culture. Here is how:
Individuals Identify Themselves as Competent and Confident Practitioners
- A Strong Foundation: They learn the basics, they speak a common language and they use a proven methodological approach
- A Community of the Like-Minded: They are part of a formally recognized community of professionals
- A Distinction: They have made it through the studies and requirements necessary to obtain the CRISC distinction
- Unlocking Strategic, Big-Picture Thinking: Their competencies become habits, freeing their minds to think more broadly and ask probing questions
- Clearly Articulating Value: Labeling/linking value and purpose effectively with executives, second/third line and examiners
Organizations Evolve to a Risk Intelligent, Value-Driven Ecosystem, Fueled by Trained Practitioners
- Organic Neural Networking Within the Company: Team members formed their own think tanks, resulting in multiple innovations/enhancements within the first few months after CRISC training
- Advancing and Benchmarking Industry Expertise: Team members developed external relationships within and across ISACA chapters to anticipate opportunities, prevent issues/events, and design better controls
- Organic Employee Development Ripple Effect: Coaching took on a natural form, where CRISC candidates willingly encouraged, coached and mentored others
The next time you are asked what you do for a living, it will be much easier to reply with something like: “I prevent bad things from happening to our customers/company. When I do my job well, my customers are safe and secure, and my company’s brand becomes stronger.”
With CRISC as an enabler, your employees will grow, develop and identify as professionals, and your organization will become enmeshed in a risk culture that is strong, resilient and organically intelligent.
Editor’s note: To find out more about the custom training program opportunities offered through ISACA, visit ISACA’s enterprise training page.
Artificial intelligence (AI) and machine learning are common terms in the world of emerging technology. Although still sounding futuristic to some people, AI is already being deployed everywhere from fantasy football weekly recap emails, to retail environments, to advanced, state-sponsored surveillance systems. In ISACA’s Next Decade of Tech: Envisioning the 2020s research, a recent survey of more than 5,000 global technology professionals, 38% of respondents expect AI and machine learning to be the most important enterprise technology of the next decade – more than cloud platforms (22%), big data (16%) and even blockchain (8%). Ballooning costs, labor shortages, poor service quality, strong public interest, and recent market shifts forcing the enhanced availability of electronic records are strong indicators that few industries will experience the impact of AI more than healthcare.
Taking a step back, the healthcare field has an essential yet polarizing role in today’s society – residing precariously at the inflection point of people’s health and a multi-trillion-dollar for-profit industry. As introduced in William Kissick’s book, Medicine’s Dilemmas: Infinite Needs Versus Finite Resources, the “Iron Triangle of Health Care” is a simple but effective depiction of the resulting balancing act:
The three interlocking factors of the Iron Triangle – access, cost and quality – depict an industry forced to make seemingly impossible trade-offs between providing quality care to everyone who needs it and containing skyrocketing costs. Where do AI and machine learning fit into this seemingly hopeless triangle of despair? They don’t. Instead, they have the potential to be a disruptive force powerful enough to break the traditional model and improve all three factors at once – albeit not without consequences.
A telling illustration of this phenomenon is the partnership between IBM Watson and University of North Carolina Lineberger Comprehensive Cancer Center, as reported in a 2016 episode of 60 Minutes on CBS. In a pilot designed to test how AI could be deployed in a clinical oncology environment, IBM “taught” Watson to read medical literature in about a week, and a week later it read 25 million published medical papers and had the ability to continuously scan the web for the latest medical research. Traditionally, analyzing a patient’s individual genetic mutations and other relevant information against the vast population of medical literature and open clinical trials could take days or weeks. The analysis performed was highly manual, and relied on doctors’ ability to stay current on clinical trials happening around the world. Armed with this vast body of knowledge, doctors fed Watson actual cases from cancer patients whose treatment had exhausted all options known to the panel of experts at the time, and in over 90% of the cases, Watson identified the same experimental treatment options as the panel of experts. However, even more striking, in roughly 30% of the cases, Watson was able to identify a potential treatment not previously considered by the panel.
The results of this limited trial are by no means a silver bullet, but the outcome is especially promising in the context of the Iron Triangle because it demonstrates how AI can be used to improve the quality of care (Quality) for more patients (Access), with fewer doctors and in less time (Cost). While AI cannot offer a comforting presence and doesn’t have the capacity for genuine empathy – both considered important factors leading to positive medical outcomes – as AI penetrates the time-intensive world of health care administration (claims, billing, fraud detection, etc.), it should allow doctors to spend far more of their time in patient-facing roles.
The UNC/Watson pilot is one demonstration of the countless potential use cases for AI in healthcare. Drug development, virtual chatbots, dictation support for physicians and complex data analysis capable of predicting likelihood of hospital re-admittance are among the other uses currently being piloted and implemented across the world.
Given the acute focus on healthcare costs and outcomes in today’s political climate, and a lack of concrete solutions with unified support, AI is poised to take center stage in the next decade. Without question, there are legitimate concerns around the accuracy of AI-driven results, patient privacy and the potential for companies to misuse the vast troves of newly available data. However, as information technology professionals focused on compliance, risk management, and security, our role in this impending shift will prove critical in ensuring AI is deployed securely, data is used appropriately, and the results delivered are accurate and actionable.
The rapidly increasing pace of technology change and digital disruption leads to an unprecedented pace at which organizations must address opportunities and risks that could make or break their success. In the new decade of the 2020s, technology-driven exponential change will accelerate even more sharply. Unfortunately, most organizations are ill-prepared for what is to come, and will remain so unless they replace their reactionary approach to the technology landscape with an anticipatory one.
Reactionary strategies are reliant on attempting to become more agile and react quickly after a disruption or problem occurs – perhaps an unforeseen risk related to deploying a new technology or a competitor’s new product that is suddenly commanding market share. While the ability to muster an agile response is an important competency for organizations to possess, the organizations that will succeed in the 2020s and beyond will be the ones that become anticipatory, using hard trends (based on future facts) to identify disruptions before they disrupt and to pre-solve predictable problems.
In ISACA’s Next Decade of Tech: Envisioning the 2020s research, many of the sweeping advancements that will reshape the technology landscape become evident:
- Ninety-three percent of respondents say the augmented workforce (people, AI and robots working closely together) will reshape how most or some jobs are performed
- Respondents say AI/machine learning will be the most important enterprise technology of the next decade, yet only half of respondents think enterprises will give AI’s ethical considerations sufficient attention
- Relatively few respondents think enterprises are investing enough in the technology (30%) or people skills (19%) to successfully navigate the changing technology landscape of the 2020s.
Given these and other changing dynamics, organizations need to take an honest look at their approach to managing technology change and commit to becoming far more anticipatory – or face the likelihood that they will be disrupted, losing ground to competitors. One of the best starting points is to learn to separate hard trends (based on future facts) from soft trends (assumptions about the future that may or may not happen). If you spot an emerging trend and it’s a hard trend, then you know it will happen, and that allows you to turn disruptive change into an advantage. But a trend by itself is not actionable; it is imperative that you identify a related opportunity for each trend. For example, we all know that 5G wireless will be rolled out at an increasing rate globally in 2020 – that’s a future fact. But to give this hard trend life, you need to identify the 5G opportunity for your organization. Remember, billion-dollar companies such as Uber were started when 4G rolled out, thanks to new wireless bandwidth opportunities. If you determine that the trend you observe is a soft trend, there are opportunities in how you might want to influence that trend, since it is not a future certainty. For example, healthcare costs in the US are going up, but this is not a future fact, making it a soft trend. An opportunity to influence this trend would be to use blockchain to bring greater pricing transparency, trust, security and competition to the current healthcare ecosystem.
So, what is holding organizations back from progressing to a more anticipatory state? While dealing with inertia related to underlying legacy technology might be part of the issue, the larger challenge is changing our legacy thinking. It’s our legacy thinking that keeps us trapped in a break/fix cycle instead of predict and prevent. Even though most of us intellectually understand that technologies such as AI, blockchain, the Internet of Things, 5G and much more call for new business models and approaches, we’re too busy dealing with day-to-day challenges to recalibrate in meaningful ways. Yet the more we hunker down, just trying to keep up, the less likely we are to be anticipatory, and the less prepared we will be to capitalize on the inevitable disruptions to come.
Another needed mindset shift in the new decade will be resisting the temptation to view new technology capabilities through an all-or-nothing prism. There needs to be room for nuance – a blending of the old and new ways of operating. Take driverless vehicles, which 48% of ISACA survey respondents think will be mainstream in their countries by the end of the decade. While many passengers might not be comfortable giving full control to a driverless vehicle at a high rate of speed on an expressway, there will be other circumstances in which passengers are receptive to grabbing an autonomous ride. Many drivers like to drive, but no drivers like accidents. Semi-autonomous vehicles that let you drive when you want and keep you from having an accident will be a winning combination in this new decade. And in many areas, such as a large campus or large industrial park, full autonomous vehicles will be the best option.
Similarly, the advancements of AI in the medical arena will not – and should not – eliminate the need for doctors, as diagnostic and treatment options can be optimized with a human doctor analyzing AI-generated data and insights. And while using cash is decreasing at a rapid rate in an era of mobile payments, there might be certain circumstances and geographies in which cash remains useful. Rather than debating whether the new or old way of doing things is superior, a far better strategy is to integrate the old with the new technology so that the integration provides more value than the old or the new by itself. This provides a strong pathway forward.
The technological forces likely to shape the next decade call for practitioners and their organizations to become anticipatory rather than reactionary. Almost regardless of the area – cybersecurity, behavior analytics, quantum computing – we are moving beyond the stage in which reacting quickly to changes as they occur will be sufficient. It is time for organizations to identify the hard trends in technology that are shaping the future, attach relevant opportunities to those trends, and take action in order to thrive in the next decade and beyond.
Author’s note: To receive a complimentary hard copy (if in the US) or electronic copy (if elsewhere) of The Anticipatory Organization, visit theaobook.com. Only shipping costs will apply. For further insights from Daniel Burrus on becoming anticipatory, follow Daniel on LinkedIn.
The potential of blockchain technology has inspired hype and buzz for years, but we are now starting to see real implementation across various sectors. Use cases have proven extremely beneficial in industries such as banking, healthcare and security. One industry where the technology is gaining prevalence is travel and hospitality. Although we have not yet seen full-blown disruption there, it could be on the horizon. Technically, these can be considered two separate industries, but for practical purposes they can be grouped together.
Advantages of Blockchain
Blockchain is essentially a publicly available ledger, or list of digital records, where individual entries are stored anonymously and permanently. The individual records, or “blocks,” are cryptographically linked as they are added, so once information has been logged in the blockchain, it is permanent and cannot be altered or erased. That is also what makes the technology unique: all the data on a blockchain is decentralized and shared across a peer-to-peer network.
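The hash-linking that makes records tamper-evident can be sketched in a few lines of Python. This is a toy illustration only, not a production blockchain – real networks add digital signatures, consensus and peer replication – and the function and field names here are invented for the example:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, record: str) -> None:
    """Append a record, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev}
    block["hash"] = block_hash({"record": record, "prev_hash": prev})
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Recompute every hash and link; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"record": block["record"],
                                        "prev_hash": block["prev_hash"]}):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5 tokens")
add_block(chain, "Bob pays Carol 2 tokens")
print(is_valid(chain))                            # True
chain[0]["record"] = "Alice pays Bob 500 tokens"  # tamper with history
print(is_valid(chain))                            # False
```

Because each block’s hash covers the previous block’s hash, changing any historical record invalidates every block after it – which is why altering a replicated ledger undetectably is so difficult.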
Below are some of the key advantages of blockchain:
- Transparency – All the individual pieces of data come together to form a digital ledger. Since the data is hosted on a peer-to-peer network, this ledger is accessible to everyone on the network at any time.
- Security – A blockchain can potentially be one of the most secure and protected forms of data storage. Each entry on the ledger is not only cryptographically protected but also linked to the previous one. Additionally, the data is replicated across many devices on the network, making it far harder for attackers to alter or destroy.
- Cost-effective – Blockchain can help reduce business costs on many fronts. First, third parties are no longer needed to manage transactions and keep records. You will also be able to track your operations more meticulously, which reduces discrepancies.
- Efficient – Since all the data is available on one ledger, the amount of clutter from unnecessary documents and files is reduced substantially. It is also easier to look up data when you need it instead of going through stacks of files.
Blockchain in the Travel Industry
Below are the ways in which blockchain is being implemented in the travel industry:
- Payments – Certainly one of the most prominent uses of blockchain is digital payments, and the same goes for the travel industry. Payments are secured and trackable through the blockchain network while remaining very convenient. This can be done through cryptocurrency or through a streamlined bank payment method that is an upgrade from the traditional approach.
- Security – This is a very important and sensitive aspect of travel, and blockchain is making an impact here as well. The blockchain can store data and ID information of every passenger and identify potential threats. Used properly, this technology has the potential to drastically decrease check-in times and queues in airports, as a simple retina scan or fingerprint can be used instead of documents.
- Tracking luggage – Every so often, passenger luggage ends up misplaced or sent to an incorrect location. Such logistical challenges can be managed with blockchain technology. The decentralized nature of the database helps companies share information to the benefit of everyone involved, especially on flights where luggage changes hands several times.
- Loyalty program – Loyalty programs are very important for many frequent travelers and help the companies generate a significant amount of revenue. With the help of blockchain, these programs can be made better by simplifying the process of earning and redeeming reward points. The access and distribution of reward points can also be simplified thanks to this technology, as they can be made available anytime and anywhere.
Current Examples of Blockchain in the Travel Industry
Although blockchain has enormous potential for future technologies, it is already being implemented in many businesses today:
- LockChain – This is a marketplace that allows individuals and businesses to rent out their spaces and property. Thanks to the decentralized nature that eliminates the middlemen, there is no commission fee charged. This is a business that can potentially disrupt the industry and challenge companies like Airbnb, Booking.com, etc.
- TUI – Tourism company TUI maintains and manages inventory through the use of blockchain. TUI is one of the few mainstream tourism companies to fully commit to the technology and is now seeing the benefits.
- Trippki – As mentioned earlier, loyalty programs are the cornerstone of any good hospitality company. Trippki offers users TRIP tokens through a rewards program powered by blockchain technology. Once again, since no third party is involved, there is a direct connection between the company and its consumers.
- ShoCard – One of the uses of blockchain is in the identity management space, which is where ShoCard has been implemented. Customer ID details can be uploaded and then shared or retrieved at any time to verify their identity. This not only helps with security but is also very convenient and efficient.
Though these are just a few examples of blockchain’s potential, there is much more to come. Since it is a recent technological advancement, we have only scratched the surface of its potential. According to some forecasts, the global blockchain market is expected to be worth US$20 billion by 2024. With new industries and new use cases emerging every day, it won’t be long before most consumers interact with this technology on their travels and beyond.
Author’s note: Harsh Arora is a proud father of four rescue dogs and a leopard gecko. Besides being a full-time dog father, he is a freelance content writer/blogger and a massage expert who is skilled in using the best massage gun.
Cyber risk has understandably become a focal point for enterprise risk managers, but the risk landscape is multi-layered and extends beyond the realm of cybersecurity. In addition to contending with a daunting array of cyberthreats, enterprises are determining how much risk they are willing to accept in deploying emerging technologies, working through a heightened focus on customer privacy and adjusting to changes in the regulatory environment.
New industry research from ISACA, CMMI Institute and Infosecurity shows that enterprises are struggling to manage and optimize their risk, not only in confronting cyber risk but in getting a firmer handle on the holistic enterprise risk environment. Below is my perspective on three data points from the research that I found particularly significant:
The shifting threat landscape is wreaking havoc. Changes/advances in technology and changes in types of threats were pinpointed by survey respondents as the top two cybersecurity challenges organizations face today, even more so than other response options, such as too few security personnel and inadequate security budgets.
This data point reinforces that the unprecedented pace of technological change – and the corresponding domino effect on the threat landscape – is placing a heavy strain on the capabilities of enterprises to effectively and securely leverage these new technologies. Security and enterprise risk programs that were sufficient five years ago – or, in some cases, maybe even five months ago – can be inadequate in holding up to new risks that emerge.
Risk management is about optimizing risk, not removing it from the equation altogether, so these challenges should not preclude enterprises from thoroughly testing and exploring how emerging technologies can be deployed to create efficiencies and spark innovation.
The ISACA study found that while nearly two-thirds of respondents have defined processes for risk identification, only 38 percent feel that those processes are at either the managed or optimized level of the risk-identification maturity spectrum. This points to a high-adoption but low-optimization trend, demonstrating room for improvement in enterprises actually taking action to address risk rather than just setting up the framework.
Security and risk professionals must revisit their processes, pursue the ongoing training and knowledge resources needed to understand how these technologies are reshaping the risk environment, and communicate those risks clearly to enterprise decision-makers who might be tempted to green-light deployments based on market pressures without first conducting the needed level of due diligence.
Cloud was identified as the emerging technology that most increases risk. By an overwhelming margin, cloud is deemed to be the technology that most expands risk (70 percent of respondents say it increases risk, compared to the next highest response option, Internet of Things, which came in at 34 percent).
As the survey report notes, “There is a good reason why the cloud percentage is so high – practitioners are intimately familiar with the challenges of cloud, including compliance and regulatory challenges, data sovereignty, lack of direct operational control over service provider environments, shadow adoption, and numerous other pain points.”
Essentially, cloud-related risk is much more of a known commodity than risk related to more recent, emerging technologies. However, if organizations align their cloud projects to business strategies and provide relevant governance oversight, cloud risk can be appropriately mitigated.
This data point also raises questions about how technologies that are less mature than cloud – such as artificial intelligence and blockchain – will impact enterprise risk as adoption increases and more use cases arise. Each technology brings its own set of risks and potential misuses that will need to be accounted for in enterprises’ risk programs.
Reputational risk should not be overlooked. Respondents identify reputational risk as the second-most critical area of risk facing their organizations today, behind only information/cybersecurity risk. While respondents naturally identify cyber risk as a leading concern, given the volume and increasing sophistication of the current threat landscape, reputational risk can ultimately have an even longer-term impact on an organization. There are countless examples of enterprises that have become embroiled in a public relations crisis and never fully recovered – or recovered only after several years of concerted time and expense dedicated to rehabilitating their brand image.
Of course, cyber risk and reputational risk often go hand-in-hand, given that the fallout from major breaches and other cyber incidents can have a direct and serious impact on an enterprise’s reputation with customers and the general public. But reputational damage also can arise from a variety of other sources, such as fiscal mismanagement, penalties from regulatory compliance oversights and a lack of transparency with customers when it comes to how their personal data is being leveraged.
Even greater challenges ahead
The considerations mentioned above are just some of the many topics that enterprise risk leaders will need to work through in the 2020s and beyond. The risk environment will only become more complex in the new decade, as the aforementioned pace of technology-driven change will further accelerate, with the evolving cybersecurity landscape and the rise of AI factoring prominently into that equation. Managing and optimizing risk have long been essential objectives for high-performing enterprises, but the stakes are rising – as is the degree of complexity.
Editor’s note: This post originally appeared in CSO.
Based upon my experience in Enterprise Risk Management, I was not surprised to see respondents to new State of Enterprise Risk Management research from ISACA, CMMI Institute and Infosecurity identify risk identification and risk assessment as the most employed risk management steps in their organizations. Nor was I surprised to see that only 38 percent of respondents indicate that their enterprises have processes at either the managed or optimized level for risk identification. In my experience, this is often due to suboptimal execution of the risk identification process.
As the report states in the Executive Summary, “Risk management is about optimizing risk rather than removing it entirely.” It has always been my belief that risk management serves two purposes. The first is to keep the enterprise from stepping unwittingly into a big pothole. The second is to provide the executive team with the last best piece of information required to optimize the use of risk capital across the enterprise.
In order to successfully deploy an enterprise risk framework across an organization, it is always best to be practical and expedient to the extent allowed by your regulatory environment. Where I have seen this go wrong most often is in the deployment of an enterprise-wide risk assessment. I’ve seen instances where an enterprise assessment completely missed accounting for the biggest risks, usually produced by enterprises that do not have the right participation from top management. Further, I’ve seen enterprise assessments get so detailed as to tie the organization into knots. A friend in the consulting business told me of a project in which an unnamed regional bank was in the process of unwinding a risk assessment that had paralyzed the institution with 52,000 identified items requiring remediation. A risk assessment run amok ties up valuable resources in an endless loop, leading to the suboptimal allocation of resources within both the business and risk management.
Below are several (what I hope are) practical recommendations to try to avoid this phenomenon.
1. Big risks can be ignored when the right people aren’t in the room for the conversation. Start at the highest level within the organization and get the people in the room who own the risk from the top down. This keeps the right themes in play and prevents the well-meaning though less informed from dragging the exercise down to a mind-numbing level of tedium. A risk assessment needs to be the business or operating function’s view, guided and respectfully challenged by risk management. Including the right people in the process from the outset creates buy-in to and ownership of the results.
2. When constructing your risk assessment, keep to a five-box chart. Anything greater invites a significant amount of conversation parsing the shades of gray while providing immaterial benefit.
3. A risk assessment is NOT a SOX process. This is not about curing control deficiencies; this is about managing risk to an acceptable level after controls have been put into place. After you have determined the Residual Risk Rating in a risk assessment, there should be an evaluation as to whether or not a risk is “worth” fixing from a financial, reputational or strategic perspective.
4. In your enterprise risk framework, include a formal Risk Acceptance process. Here is where you may declare that as an organization any residual risks that end up in the lower-left quadrant may be risk accepted and no steps need be taken to cure. If this risk acceptance process is well documented, reasonable and supportable, it should pass muster with any regulator. A risk assessment should be reevaluated annually to keep an eye on risk migration.
5. Make sure that the Impact and Likelihood scales reflect the size and maturity of the organization and are clearly discussed and agreed upon by all participants through the risk governance process. This will help keep the minutiae and disagreements from creeping into the process. Consult your finance team or head of investor relations (if publicly traded) to obtain a sense of what external constituents may feel is material when constructing a table for discussion. Another suggestion is to listen to your company’s earnings call, if publicly traded, and pay attention to how earnings are discussed and the questions asked by the analyst community. It will tell you what rises to the level of materiality to your shareholders.
6. Agree that the risks in the upper right-hand quadrant of the Residual Risk chart have the highest priority with regard to mitigation strategies and deal with those first. Provide a reasonable expectation and timeframe for the moderate risks.
7. Be sure that executive management and the board agree and sign off on the results of the final risk assessment, including the scales used in your charts and the risk acceptance process.
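The matrix mechanics behind recommendations 2, 4 and 6 can be sketched in a few lines of code. This is a minimal illustration only, assuming a five-by-five Impact and Likelihood matrix; the quadrant thresholds, labels and sample risks are hypothetical, not drawn from any standard.

```python
# Sketch of a five-by-five residual risk matrix, per the recommendations above.
# Scales, thresholds and register entries are illustrative assumptions.

def classify_residual_risk(impact: int, likelihood: int) -> str:
    """Classify a residual risk scored on agreed 1-5 Impact/Likelihood scales."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("Impact and likelihood must be on the agreed 1-5 scales")
    # Lower-left quadrant: candidate for formal risk acceptance.
    if impact <= 2 and likelihood <= 2:
        return "risk-accept"
    # Upper-right quadrant: highest priority for mitigation strategies.
    if impact >= 4 and likelihood >= 4:
        return "mitigate-first"
    # Everything else: moderate, with a reasonable remediation timeframe.
    return "moderate"

register = [
    {"risk": "vendor data breach", "impact": 5, "likelihood": 4},
    {"risk": "stationery price increase", "impact": 1, "likelihood": 2},
    {"risk": "key-person dependency", "impact": 3, "likelihood": 3},
]

for item in register:
    item["disposition"] = classify_residual_risk(item["impact"], item["likelihood"])
    print(item["risk"], "->", item["disposition"])
```

Keeping the disposition rule this simple is the point of recommendation 2: a coarser matrix forces the conversation onto priorities rather than shades of gray.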
An appropriate risk assessment process is a valuable tool in managing enterprise risk. Improperly deployed, it can result in poor allocation of resources. I am confident enterprises would prefer resources spent on mitigating material risk issues rather than doing risk assessments that add little marginal value. Enterprise Risk Management should be a partner with the business in ensuring an appropriate risk-adjusted return is made for the entity’s constituency. It is inevitable that a natural tension exists in that relationship, but reasonability, transparency and participation create buy-in to the process and ownership of the results.
In today’s environment, companies all over the globe are experiencing culture risk. Yes, culture indeed has an impact on risk and every company has a unique culture. The key is to understand it, manage it, and leverage it when possible to obtain competitive advantage. Every company is faced with both positive and negative risk – that is, threats and vulnerabilities that could adversely impact the organization, its reputation and stock value, as well as opportunities that could have a positive impact. While there are many factors that impact the risks that a company faces, many times business leaders overlook and underestimate the impact of company culture.
So, what makes up company culture? Company culture is the character of a company. It sets the tone of the environment in which employees work daily. Company culture includes a variety of elements, including company strategy, mission, vision, values, policies and behaviors. Recently, many major organizations like Google and Microsoft have been revamping policies and procedures to address issues such as sexual harassment, racism, and discrimination because of the negative impact these cultural behaviors have had on the overall success of the company. Policies and procedures are tools that can be used to hold individuals accountable for their behavior. The key is ensuring that everyone adheres to the rules. It is also important to visibly reward good behavior and punish bad behavior on a consistent basis.
Once policies and procedures are put in place, it is important to gauge their effectiveness. Are the policies being followed and do they need to be modified in any way? Organizations that are truly committed to the idea will institute monitoring mechanisms to ascertain this information. Oversight and reporting tools that are properly implemented will allow employees at all levels to feel free to report breaches without fear of retribution. An oversight function that moves quickly and consistently on reports will encourage a culture of accountability. The lack of such functions leaves an enterprise at risk of high turnover, unmotivated employees, and even potential lawsuits. Tools and procedures such as anonymous hotlines, required compliance training, and explicitly stated company values could be viewed as ways to mitigate such risk.
Simply instituting tools, policies, and procedures could be largely ineffective if the organization’s leadership doesn’t first take a long hard look at the current state of affairs. What is the employee demographic (age, gender, educational status, etc.)? Understanding backgrounds and human behavior can be key to having a clear picture of the culture within an enterprise. For instance, studies have shown that millennials view and respond to the world, including the workplace, in a very different way than older professionals. Understanding people helps an organization refine its culture, including the inherent risks associated with it.
There are many factors that typically impact the culture of an organization, including industry regulations, the competitive environment and economic climate. These factors have direct and indirect influence on how people make decisions on a daily basis. Leadership should set clear expectations about what is acceptable behavior in light of these factors. Influencing culture is not easy and can be time-consuming and costly. However, the cost of doing nothing can be even greater.
The benefits that can be realized from using third parties to support the delivery of products and services are always part of any good sales pitch by prospective vendors. Often these benefits include reductions in operational spend, scalability, improved delivery time, specialized capabilities, and the availability of proprietary tools or software, all of which equate to a competitive advantage for companies leveraging third-party relationships effectively.
Companies recognize and capitalize on these advantages: A 2017 study of nearly 400 private and public companies, released by the Audit Committee Leadership Network, reported that two-thirds of those companies have over 5,000 third-party relationships. This staggering statistic illustrates how deeply organizations have come to rely on third parties for everything from back-office activities (payroll, help desk, business continuity infrastructure, etc.) to customer-facing roles (call center, sales and distribution, marketing, etc.). But this heavy reliance also elevates third-party risk management from a “nice to have” capability to a business imperative.
While these relationships provide the opportunity for an organization to realize significant benefits, they also introduce a number of potential risks. Before deciding to outsource responsibilities, business leaders must have a broad understanding of their organization’s risk landscape and develop an approach to evaluate the risks introduced by using third parties. Shifting the focus from saving money to creating value is one way companies can start thinking differently about how they manage third parties.
How Do I Know What I Should Outsource?
The most essential step is knowing the value your organization brings to the market.
As an example: If your company is known for developing and distributing high-quality instruments, outsourcing your manufacturing operations is not the best place to start. Issues with that third-party relationship are likely to be customer-facing and impact your hard-earned reputation for precision and quality. Additionally, the skillsets and facilities required to manufacture your product may not be widely available, making your business effectively a hostage of your vendor.
In contrast, if you decide to outsource a function like payroll, even though poor performance might be an annoyance for employees, it is easily remedied by switching to one of the many alternatives available. There also is no direct customer impact in the short term, so your reputation remains intact.
The most successful outsourcing relationships allow companies to focus on the value they deliver to the market by outsourcing activities that require significant resources or specialized abilities but are outside an organization’s core competencies and not aligned with their long-term strategic vision.
How Should I Perform Due Diligence on Potential Third Parties?
Once you have identified which processes can be outsourced as well as their inherent risks, you can begin performing due diligence on potential vendors. The level of due diligence should be tailored to the significance of the relationship as well as the potential risks it poses. Document your requirements and ask prospective vendors to address each item directly rather than deliver their boilerplate sales pitch, which is typically designed to gloss over or avoid known weaknesses. Make sure you are comfortable with any capability or control gaps and have considered whether internal resources can shoulder the additional burden.
We Have Selected a Third Party to Engage – Now What?
Once you have determined the process to be outsourced, identified the inherent risks associated with that process, performed your due diligence, and selected a vendor, it is time to formalize the relationship with a contract – typically a Statement of Work (SOW) – that includes both adequate safeguards and defined performance targets.
Those charged with contract negotiation (typically Legal and/or Procurement) need to be acutely aware of the value you expect the third party to provide to structure an effective contract. To avoid potential conflicts of interest, purchasing managers should not be responsible for negotiating vendor contracts without oversight, as they are often incentivized by operational goals, and less likely to consider the broader enterprise risk landscape.
While most vendor contracts contain defined Service Level Agreements (SLAs) for operational metrics, like timeliness and accuracy, they often don’t include provisions like the mandatory disclosure of system/data breaches, timely communication of relevant audit observations, insurance requirements, periodic reporting on financial viability, etc., leaving organizations in a tough spot when issues stemming from a third-party relationship arise.
How Can I Make Sure My Outsourced Provider Is Meeting Expectations and Minimizing the Inherent Risk to My Organization?
The best way to illustrate this step is to steal from an old cliché: “Treat others how you wish to be treated.” That is, if you want your third parties to share your values and protect the interests of your organization the same way you would, it is important not only to formalize critical details of the relationship in the contract but also to help them understand the business context around the service they provide. The more you treat your third parties like partners rather than vendors, the more likely they are to perform in line with your organization’s values. Mix in a reasonable number of SLAs designed around the identified risks with clearly assigned accountability for monitoring SLA performance, and you will be positioned to identify threats or emerging risks that could impact your organization before they damage your bottom line – or worse – end up as front-page news.
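The SLA monitoring described above can be sketched as a simple periodic check of reported metrics against contracted targets. This is a hypothetical illustration; the metric names and thresholds are invented for the example and would come from your actual contract.

```python
# Sketch of checking a vendor's reported metrics against contracted SLA targets.
# Metric names and targets are illustrative assumptions, not standard contract terms.

SLAS = {
    "on_time_delivery_pct": {"target": 98.0, "higher_is_better": True},
    "transaction_accuracy_pct": {"target": 99.5, "higher_is_better": True},
    "breach_disclosure_hours": {"target": 72.0, "higher_is_better": False},
}

def sla_breaches(measured: dict) -> list:
    """Return the names of SLAs the vendor missed in this reporting period."""
    missed = []
    for name, sla in SLAS.items():
        value = measured.get(name)
        if value is None:
            missed.append(name)  # an unreported metric counts as a miss
            continue
        ok = value >= sla["target"] if sla["higher_is_better"] else value <= sla["target"]
        if not ok:
            missed.append(name)
    return missed

period = {"on_time_delivery_pct": 97.2, "transaction_accuracy_pct": 99.8,
          "breach_disclosure_hours": 24.0}
print(sla_breaches(period))  # on-time delivery missed its 98 percent target
```

Note that the check treats an unreported metric as a miss: provisions such as mandatory breach disclosure only protect you if silence is itself flagged, which is exactly the gap in contracts that omit those clauses.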
Editor’s note: For additional insights on the topic, download ISACA’s recent white paper on managing third-party risk.
All too often, IT and risk management professionals seem to be speaking a different language—that is, if they even speak at all. Bridging the Digital Risk Gap, the new report jointly authored by RIMS, the risk management society®, and ISACA, promotes understanding, collaboration and communication between these professionals to get the most out of their organizations’ technological investments.
Digital enterprise strategy and execution are emerging as essential horizontal competencies to support business objectives. No longer the sole purview of technical experts, cybersecurity risks and opportunities are now a core component of a business risk portfolio. Strong collaboration between IT and risk management professionals facilitates strategic alignment of resources and promotes the creation of value across an enterprise.
ISACA’s Risk IT Framework acknowledges and integrates the interaction between the two professional groups by embedding IT practices within enterprise risk management, enabling an organization to secure optimal risk-adjusted return. In viewing digital risk through an enterprise lens, organizations can better realize a broader operational impact and spur improvements in decision-making, collaboration and accountability. In order to achieve optimal value, however, risk management should be a part of technology implementation from a project’s outset and throughout its life cycle. By understanding the technology life cycle, IT and risk management professionals can identify the best opportunities for collaboration among themselves and with other important functional roles.
IT and risk management professionals both employ various tools and strategies to help manage risk. Although the methodologies used by the two groups differ, they are generally designed to achieve similar results. Generally, practitioners from both professions start with a baseline of business objectives and the establishment of context to enable the application of risk-based decision making. By integrating frameworks (such as the NIST Cybersecurity Framework and the ANSI RA.1 risk assessment standard), roles and assessment methods, IT and risk management professionals can better coordinate their efforts to address threats and create value.
For example, better coordination of risk assessments allows organizations to improve performance by identifying a broader range of risks and potential mitigations, and ensures that operations are proceeding within acceptable risk tolerances. It also provides a clearer, more informed picture of an enterprise’s risks, which can help an organization’s board make IT funding decisions, along with other business investments. Leveraging the respective assessment techniques also leads to more informed underwriting—and thus improves pricing of insurance programs, terms of coverage, products and services.
Overall, developing clear, common language and mutual understanding can serve as a strong bridge to unite the cultures, bring these two areas together and create significant value along the way.
The report is available to RIMS and ISACA members through their respective websites. To download the report, visit RIMS Risk Knowledge library at www.RIMS.org/RiskKnowledge or www.isaca.org/digital-risk-gap. For more information about RIMS and to learn about other RIMS publications, educational opportunities, conferences and resources, visit www.RIMS.org. To learn more about ISACA and its resources, visit www.isaca.org.