How Company Culture Helps Shape the Risk Landscape

Paul Phillips

In today’s environment, companies all over the globe are experiencing culture risk. Yes, culture indeed has an impact on risk, and every company has a unique culture. The key is to understand it, manage it, and leverage it when possible to obtain competitive advantage. Every company is faced with both positive and negative risk – that is, threats and vulnerabilities that could adversely impact the organization, its reputation and stock value, as well as opportunities that could have a positive impact. While many factors shape the risks a company faces, business leaders often overlook or underestimate the impact of company culture.

So, what makes up company culture? Company culture is the character of a company. It sets the tone of the environment in which employees work daily. Company culture includes a variety of elements, including company strategy, mission, vision, values, policies and behaviors. Recently, major organizations such as Google and Microsoft have been revamping policies and procedures to address issues such as sexual harassment, racism, and discrimination because of the negative impact these cultural behaviors have had on the overall success of the company. Policies and procedures are tools that can be used to hold individuals accountable for their behavior. The key is ensuring that everyone adheres to the rules. It is also important to visibly reward good behavior and punish bad behavior on a consistent basis.

Once policies and procedures are put in place, it is important to gauge their effectiveness. Are the policies being followed, and do they need to be modified in any way? Organizations that are truly committed to the idea will institute monitoring mechanisms to ascertain this information. Oversight and reporting tools that are properly implemented will allow employees at all levels to feel free to report breaches without fear of retribution. An oversight function that moves quickly and consistently on reports will encourage a culture of accountability. The lack of such functions leaves an enterprise at risk of high turnover, unmotivated employees, and even potential lawsuits. Tools and procedures such as anonymous hotlines, required compliance training, and explicitly stated company values could be viewed as ways to mitigate such risk.

Simply instituting tools, policies, and procedures could be largely ineffective if the organization’s leadership doesn’t first take a long hard look at the current state of affairs. What is the employee demographic (age, gender, educational status, etc.)? Understanding backgrounds and human behavior can be key to having a clear picture of the culture within an enterprise. For instance, studies have shown that millennials view and respond to the world, including the workplace, in a very different way than older professionals. Understanding people helps an organization refine its culture, including the inherent risks associated with it.

There are many factors that typically impact the culture of an organization, including industry regulations, the competitive environment and economic climate. These factors have direct and indirect influence on how people make decisions on a daily basis. Leadership should set clear expectations about what is acceptable behavior in light of these factors. Influencing culture is not easy and can be time-consuming and costly. However, the cost of doing nothing can be even greater.

Third-Party Vendor Selection: If Done Right, It’s a Win-Win

Ryan Abdel-Megeid

The benefits that can be realized from using third parties to support the delivery of products and services are always part of any good sales pitch by prospective vendors. Often these benefits include reductions in operational spend, scalability, improved delivery time, specialized capabilities, and the availability of proprietary tools or software, all of which equate to a competitive advantage for companies leveraging third-party relationships effectively.

Companies recognize and capitalize on these advantages: a 2017 study of nearly 400 private and public companies by the Audit Committee Leadership Network found that two-thirds of those companies have over 5,000 third-party relationships. This staggering statistic illustrates how deeply organizations have come to rely on third parties for everything from back-office activities (payroll, help desk, business continuity infrastructure, etc.) to customer-facing roles (call center, sales and distribution, marketing, etc.). But this heavy reliance also elevates third-party risk management from a “nice to have” capability to a business imperative.

While these relationships provide the opportunity for an organization to realize significant benefits, they also introduce a number of potential risks. Before deciding to outsource responsibilities, business leaders must have a broad understanding of their organization’s risk landscape and develop an approach to evaluate the risks introduced by using third parties. Shifting the focus from saving money to creating value is one way companies can start thinking differently about how they manage third parties.

How Do I Know What I Should Outsource?
The most essential step is knowing the value your organization brings to the market.

As an example: If your company is known for developing and distributing high-quality instruments, outsourcing your manufacturing operations is not the best place to start. Issues with that third-party relationship are likely to be customer-facing and impact your hard-earned reputation for precision and quality. Additionally, the skillsets and facilities required to manufacture your product may not be widely available, making your business effectively a hostage of your vendor.

In contrast, if you decide to outsource a function like payroll, poor performance might be an annoyance for employees, but it is easily remedied by switching to one of the many alternatives available. There also is no direct customer impact in the short term, so your reputation remains intact.

The most successful outsourcing relationships allow companies to focus on the value they deliver to the market by outsourcing activities that require significant resources or specialized abilities but are outside an organization’s core competencies and not aligned with its long-term strategic vision.

How Should I Perform Due Diligence on Potential Third Parties?
Once you have identified which processes can be outsourced, as well as their inherent risks, you can begin performing due diligence on potential vendors. The level of due diligence should be tailored to the significance of the relationship as well as the potential risks it poses. Document your requirements and ask prospective vendors to address each item directly, rather than allowing them to deliver a boilerplate sales pitch, which is typically designed to gloss over or avoid known weaknesses. Make sure you are comfortable with any capability or control gaps and have considered whether internal resources can shoulder the additional burden.
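
As a rough illustration of how documented requirements can be tracked against vendor responses, the sketch below records each requirement, the vendor’s answer and a status, then flags anything that is not fully met. The field names and the met/partial/gap scale are illustrative assumptions, not part of any formal methodology.

```python
# Minimal sketch: track documented due-diligence requirements against a
# prospective vendor's responses and flag gaps for follow-up.
# Field names and the "met / partial / gap" scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    response: str      # vendor's written answer to this specific item
    status: str        # "met", "partial", or "gap"

def gap_report(vendor: str, requirements: list[Requirement]) -> None:
    gaps = [r for r in requirements if r.status != "met"]
    print(f"{vendor}: {len(gaps)} of {len(requirements)} requirements not fully met")
    for r in gaps:
        print(f"  [{r.status.upper()}] {r.req_id} - {r.description}")

if __name__ == "__main__":
    reqs = [
        Requirement("SEC-01", "Independent security audit report provided", "Report attached", "met"),
        Requirement("SEC-02", "Encryption of data at rest", "Planned for next release", "partial"),
        Requirement("BCP-01", "Documented business continuity plan", "No response", "gap"),
    ]
    gap_report("Example Vendor", reqs)
```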

We Have Selected a Third Party to Engage – Now What?
Once you have determined the process to be outsourced, identified the inherent risks associated with that process, performed your due diligence, and selected a vendor, it is time to formalize the relationship with a contract – typically a Statement of Work (SOW) – that includes both adequate safeguards and defined performance targets.

Those charged with contract negotiation (typically Legal and/or Procurement) need to be acutely aware of the value you expect the third party to provide in order to structure an effective contract. To avoid potential conflicts of interest, purchasing managers should not be responsible for negotiating vendor contracts without oversight, as they are often incentivized by operational goals and less likely to consider the broader enterprise risk landscape.

While most vendor contracts contain defined Service Level Agreements (SLAs) for operational metrics, like timeliness and accuracy, they often don’t include provisions like the mandatory disclosure of system/data breaches, timely communication of relevant audit observations, insurance requirements, periodic reporting on financial viability, etc., leaving organizations in a tough spot when issues stemming from a third-party relationship arise.

How Can I Make Sure My Outsourced Provider Is Meeting Expectations and Minimizing the Inherent Risk to My Organization?
The best way to illustrate this step is to steal from an old cliché: “Treat others how you wish to be treated.” That is, if you want your third parties to share your values and protect the interests of your organization the same way you would, it is important not only to formalize critical details of the relationship in the contract but also to help them understand the business context around the service they provide. The more you treat your third parties like partners rather than vendors, the more likely they are to perform in line with your organization’s values. Mix in a reasonable number of SLAs designed around the identified risks, with clearly assigned accountability for monitoring SLA performance, and you will be positioned to identify threats or emerging risks that could impact your organization before they damage your bottom line – or worse – end up as front-page news.
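
A minimal sketch of what monitoring SLA performance against contractual targets might look like follows. The metric names, targets and the simple print-based alerting are hypothetical; in practice they would come from the actual SOW and the organization’s reporting tooling.

```python
# Minimal sketch: compare reported monthly SLA metrics against contract targets
# and flag breaches for the accountable owner. Metric names, targets, and the
# alerting mechanism (plain print) are illustrative assumptions.

SLA_TARGETS = {
    "on_time_delivery_pct": 98.0,    # minimum acceptable
    "accuracy_pct": 99.5,            # minimum acceptable
    "incident_response_hours": 4.0,  # maximum acceptable
}

def check_sla(vendor: str, month: str, metrics: dict[str, float]) -> list[str]:
    breaches = []
    for name, target in SLA_TARGETS.items():
        value = metrics.get(name)
        if value is None:
            breaches.append(f"{name}: not reported")
        elif name.endswith("_hours") and value > target:
            breaches.append(f"{name}: {value} exceeds max {target}")
        elif not name.endswith("_hours") and value < target:
            breaches.append(f"{name}: {value} below min {target}")
    if breaches:
        print(f"{vendor} {month}: SLA breaches -> {breaches}")
    return breaches

check_sla("Example Vendor", "2019-07",
          {"on_time_delivery_pct": 96.5, "accuracy_pct": 99.7})
```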

Editor’s note: For additional insights on the topic, download ISACA’s recent white paper on managing third-party risk.

Improve ROI From Technology By Addressing the Digital Risk Gap

Carol Fox

All too often, IT and risk management professionals seem to be speaking different languages—that is, if they even speak at all. Bridging the Digital Risk Gap, the new report jointly authored by RIMS, the risk management society®, and ISACA, promotes understanding, collaboration and communication between these professionals to get the most out of their organizations’ technological investments.

Digital enterprise strategy and execution are emerging as essential horizontal competencies to support business objectives. No longer the sole purview of technical experts, cybersecurity risks and opportunities are now a core component of a business risk portfolio. Strong collaboration between IT and risk management professionals facilitates strategic alignment of resources and promotes the creation of value across an enterprise.

ISACA’s Risk IT Framework acknowledges and integrates the interaction between the two professional groups by embedding IT practices within enterprise risk management, enabling an organization to secure optimal risk-adjusted return. In viewing digital risk through an enterprise lens, organizations can better realize a broader operational impact and spur improvements in decision-making, collaboration and accountability. In order to achieve optimal value, however, risk management should be a part of technology implementation from a project’s outset and throughout its life cycle. By understanding the technology life cycle, IT and risk management professionals can identify the best opportunities for collaboration among themselves and with other important functional roles.

IT and risk management professionals both employ various tools and strategies to help manage risk. Although the methodologies used by the two groups differ, they are generally designed to achieve similar results. Typically, practitioners from both professions start with a baseline of business objectives and the establishment of context to enable risk-based decision making. By integrating frameworks (such as the NIST Cybersecurity Framework and the ANSI RA.1 risk assessment standard), roles and assessment methods, IT and risk management professionals can better coordinate their efforts to address threats and create value.

For example, better coordination of risk assessments allows organizations to improve performance by identifying a broader range of risks and potential mitigations, and ensures that operations are proceeding within acceptable risk tolerances. It also provides a clearer, more informed picture of an enterprise’s risks, which can help an organization’s board make IT funding decisions, along with other business investments. Leveraging the respective assessment techniques also leads to more informed underwriting—and thus improves pricing of insurance programs, terms of coverage, products and services.

Overall, developing clear, common language and mutual understanding can serve as a strong bridge to unite the cultures, bring these two areas together and create significant value along the way.

The report is available to RIMS and ISACA members through their respective websites. To download the report, visit RIMS Risk Knowledge library at www.RIMS.org/RiskKnowledge or www.isaca.org/digital-risk-gap. For more information about RIMS and to learn about other RIMS publications, educational opportunities, conferences and resources, visit www.RIMS.org. To learn more about ISACA and its resources, visit www.isaca.org.

Know Who Your Customers Really Are or Prepare for Trouble

Robert Findlay

Recently in the UK, the women’s national football team manager, Phil Neville, called for all social media accounts to be verified and accountable as the result of a spate of racist postings, and asked for a boycott of social media until the situation is addressed. He said that one of his fellow footballers had demanded that people be verified and provide passport details and addresses so they can be held accountable for their postings. As he said, “You can be an egg on Twitter and no one knows who you are.”

Now it’s probably a sorry state of affairs if a footballer is handing out cybersecurity advice to the world’s technology practitioners, but that’s in fact exactly what has happened. Needless to say, Twitter responded with a typically noncommittal answer, saying it “will continue to liaise closely with our partners to identify meaningful solutions to this unacceptable behavior.”

So, to be clear, they won’t verify people’s identities, as that would not suit their business model. Think how many users they would lose if everyone had to upload passport details before tweeting.

This is not a one-off problem. Depending on which report you want to look at, the problem of fake and duplicate accounts is rife. Facebook deleted more than 2 billion fake accounts in the first quarter of the year; between 9 and 15% of active Twitter accounts may be social bots; and a Twitter audit estimates that only 40-60% of Twitter accounts represent real people. It’s even possible for people to fake the verified indicator on LinkedIn.

So, why is this a problem for information security practitioners?

Multiple reasons, really. Fake actors are spreading misinformation about your products, impersonating you and selling counterfeit products, phishing your staff and customers, and embedding links to malware in postings on your social media sites, among other exploits. And when it goes wrong, your organization loses business and gets bad PR. Further, there will be no chance of catching the perpetrator, since you don’t know who they are and the social media platform did not have a know-your-customer process.

So, any review you carry out on the use of social media in your organization should be based on the knowledge that no one knows who anyone else is, and your marketing people should have processes in place that take this into account, along with a response plan for when something inevitably goes wrong.

I’ll be presenting on this topic and other social media exploits in my session, “Auditing Social Media and its Cyber Threats,” at EuroCACS/CSX 2019, to take place 16-18 October in Geneva, Switzerland.

Keys to More Effective Vendor Risk Management

Jack Freund

Certain industries have a better conceptual understanding of their supply chain than others. In manufacturing, for instance, it’s very clear that raw materials come in one end and a completed, processed product for consumption comes out the other. Those products may get shipped to another manufacturer for integration into their products, or off to the consumer for their use. You can link these organizations together and build a map showing the full supply chain network. Indeed, this is often done to help planners, engineers, and managers better understand their exposure to hiccups in that chain. For other companies, however, this connection to the full breadth of vendors is more difficult to understand. The work is more evanescent as digital transformation makes work between companies seamless.

In a new ISACA white paper, Managing Third-Party Risk, I wanted to help organizations better understand how to build a third-party or vendor risk management program to better manage their cyber risk posture. When the basic building blocks of these vendor risk technologies and processes are in place, other risk disciplines such as operational risk, privacy risk and country risk can also gain a better handle on their loss exposure.

The white paper covers topics in the order in which the vendor process would be executed, starting with a discussion around governance and how foundational it is to have vendor roles clarified, procurement procedures locked down (not just anybody should be able to buy services), data sharing agreements solidified, and the collection of metadata secured (which feeds the next part of the assessment).

The main thrust of the paper is how to assess how much cyber risk a particular vendor poses to your organization. This involves triaging all your vendors and sorting them into buckets, with the riskier buckets meaning more evaluation. For those that need it, I discuss a series of artifacts that you should ask for and tests you should run.
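
As a rough sketch of the triage idea, vendors could be sorted into tiers based on a few risk factors, with higher tiers receiving deeper evaluation. The factors, weights and thresholds below are illustrative assumptions and do not come from the white paper.

```python
# Minimal sketch: triage vendors into risk tiers so that riskier buckets
# receive more evaluation. Scoring factors and thresholds are illustrative.

def vendor_tier(handles_sensitive_data: bool,
                has_network_access: bool,
                supports_critical_process: bool) -> str:
    score = (2 * handles_sensitive_data
             + 2 * has_network_access
             + 1 * supports_critical_process)
    if score >= 4:
        return "Tier 1 - full assessment (artifacts plus testing)"
    if score >= 2:
        return "Tier 2 - questionnaire plus key artifacts"
    return "Tier 3 - baseline contractual controls only"

print(vendor_tier(True, True, False))   # Tier 1
print(vendor_tier(False, True, False))  # Tier 2
print(vendor_tier(False, False, True))  # Tier 3
```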

I close with a discussion on what to do with that assessment data. I discuss how to threat model vendors and feed that into your risk assessment, and how to improve upon vendor risk evaluations done with a simple heatmap (such as focusing on the economic impact to the organization using cyber risk quantification). The paper ends with a discussion of ongoing monitoring and what to do with vendors exhibiting bad control posture.

I hope you find this white paper helpful in either establishing a new vendor risk management program or improving the maturity of your existing one. As companies continue transforming their operations with digital technologies, it’s inevitable that an organization will share its data (and its customers’ data) with more and more partners. Let’s be sure that the solutions are in place to help protect that data and engender trust in our digital economy by managing that vendor risk well.

About the author: Jack Freund, Ph.D., CISA, CRISC, CISM, is director, risk science for RiskLens, member of the Certified in Risk and Information Systems Control (CRISC) Certification Working Group, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, IAPP Fellow of Information Privacy, and ISACA’s 2018 John W. Lainhart IV Common Body of Knowledge Award recipient.

Ethical Considerations of Artificial Intelligence

Lisa Villanueva

Have you ever stopped to consider the ethical ramifications of the technology we rely on daily in our businesses and personal lives? The ethics of emerging technology, such as artificial intelligence (AI), was one of many compelling audit and technology topics addressed this week at the 2019 GRC Conference.

In tackling this topic in a session titled “Angels or Demons, The Ethical Considerations of Artificial Intelligence,” presenter Stephen Watson, director of tech risk assurance at AuditOne UK, first used examples to define the different forms of AI. For example, in the early stages of AI it was thought that a computer could not beat a human at chess or Go. Many were fascinated to find that a computer could indeed be programmed to achieve this goal. This is an example of Narrow or Weak AI, where the computer can outperform humans at a specific task.

However, the major AI ethics problem and ensuing discussion largely focused on Artificial General Intelligence (AGI), the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. Some researchers refer to AGI as “strong AI” or “full AI,” and others reserve “strong AI” for machines capable of experiencing consciousness. The goal of AGI is to mimic the human ability to reason, which could, over time, result in the deployment of technology or robots that achieve a certain level of human consciousness. Questions were posed to the audience such as:

  • Should we make AI that looks and behaves like us and has rudimentary consciousness? Around half (49 percent) of the session attendees polled said no – not because they felt it was immoral or “playing God” but because it would give a false sense that machines are living creatures.
  • Can morality be programmed into AI since it is not objective, timeless or universal and can vary between cultures?
  • Would you want AI-enabled technologies to make life-and-death decisions? Take the example of the self-driving car. Should the car be programmed to save the driver or the pedestrian in the unfortunate event of a collision?

In what scenarios would you want the AGI-enabled device to make the decision? Assurance professionals and others have been focused on gaining a better understanding of the mechanics of AI, and ISACA provides guidance on the role IT auditors can play in the governance and control of AI. However, it became apparent after this thought-provoking GRC session that questions such as the following should also be seriously discussed, to ensure that ethics and morals in the development and use of AI are not forgotten in the effort to harness this technology:

  • What rules should govern the programmer, and to what extent should the programmer’s experience and moral compass play into how the AGI responds to situations and people?
  • What biases are inherent in the data gathered and upon which the AGI is learning and making decisions?
  • How do we evaluate the programs and associated algorithms once the machine has gained human-like comprehension, as with black-box AI?

The session intentionally stayed away from a deep discussion on the mechanics of the technology to foster the dialogue and thinking necessary to reflect on the ramifications, pro or con, of this growing technological capability, its future direction, and its impact on our business and social lives.

Over time, fewer and fewer technologies will be considered part of AI because their capabilities will be so much a part of our daily lives that we won’t even think of them as AI. This was referred to as the “AI Effect.” Let’s not hesitate to ask the tough questions to ensure we are responsible and ethical in our development and use of this amazing technology as it continues to integrate into our daily routines to make our lives easier.

Share your thoughts on the ethics of AGI and other emerging tech in the comments below. We would love to hear from you and see you at the 2020 GRC conference, planned for 17-19 August 2020 in Austin, Texas, USA.

Assessing Public Sector Cyber Risk

Jack Freund

The past decade has seen a significant advance in cyber risk assessment maturity. There has been wide recognition that security and risk frameworks provide excellent processes for assessing risk but stop short of defining exactly how to compute and communicate it. Increasingly, corporate boards have been asking for quantitative measures of cyber risk, similar to what other disciplines in the organization have long provided (e.g., measuring financial impact). Instead of being content to continue providing stoplight-chart risk reports, CISOs are moving toward reporting the economic impact of cyber incidents. This move helps support critical board-level and executive decisions regarding capital adequacy and cyber insurance purchases.

This maturation in risk practice was given Gartner’s imprimatur in 2018, when its analysts declared Cyber Risk Quantification (CRQ) a critical component of integrated risk management. This was a clear indication that the future of cyber risk assessment would be to assess and present cyber risk the same way as other corporate risk disciplines. Supporting this effort is the FAIR Institute, founded in 2014, which currently has nearly 6,000 members worldwide, covering about 30 percent of the Fortune 100. The FAIR Institute was founded to promote the de facto CRQ standard, FAIR, which was released by the Open Group.

All this great progress, however, has been focused primarily on the private sector. One notable exception is the US Department of Energy (DOE), which has publicly indicated that it will use the FAIR standard to conduct CRQ assessments on critical infrastructure, both public and private. Others in public-sector service can find comfort in the DOE’s trailblazing example. One key hurdle that many public-sector organizations struggle with when adopting CRQ is the notion of expressing cybersecurity as financial risk. At first blush, it can appear anathema to public-sector service: their work is truly service to a broad population, and profit and loss are foreign concepts in that realm. In many cases, such public-sector work is literally done to save lives, and after all, how can we put a dollar value on that?

As it turns out, accounting for human life in decision analysis has long been common practice in the social and political sciences. The concept is called the “value of a statistical life,” or VSL. It has been in use for some time by the very public-sector agencies that need help assessing cyber risk in a quantitative way: the US Department of Transportation, FDA, EPA, and various public health plans. These values range from as low as $129,000 per year of life to as much as $9.6 million per life. Such values are used to provide a richer tapestry of information for decision-makers as they allocate limited resources. This does not in any way cheapen life or any of these organizations’ missions. Instead, it helps these organizations accurately evaluate public policies and budget investments based on anticipated outcomes. It’s no different for cybersecurity.

Once an organization is able to vault over the inertia of not wanting to quantify these values, it can quickly see improvements in its organizational risk assessments. For public-sector organizations, this can manifest itself in stark contrast to existing methods. Consider the difference in a cyber risk assessment or cybersecurity strategy discussion between a work product that essentially says “this is high risk, therefore we need to do it,” versus one that says “not fixing this deficiency/investing in this new capability will expose our constituency to $5 to $10 million of economic impact annually.” It becomes a far more compelling and persuasive conversation when you can articulate and defend your assertions. So, too, does it place the appropriate level of accountability on decision-makers to formally accept the risk associated with their decisions.

Accounting for these values in your public-sector CRQ assessment using the FAIR standard can be done by considering the broader economic impact of a cyber incident. Instead of thinking about an availability incident affecting sales, consider a municipal services availability problem. State and local governments in the United States are increasingly becoming the target of ransomware and, in cases such as the city of Baltimore, we are seeing problems with water and other critical services. As the city recovers, customers are getting large water bills to make up for months of the city being unable to run accounting and billing processes. Further, the city has been unable to collect the revenue incrementally, endangering its ability to fund this critical infrastructure.

These kinds of events can be straightforward to foresee and, as a result, straightforward to account for economically. If a critical public-sector service is unavailable, what are the impacts to the community it serves? Can businesses operate without it? What is the impact on tax revenue if the power goes out and commerce cannot be conducted? How many people will be unable to work because public transportation is unavailable? How does this impact the most vulnerable in the community, who often have little economic cushion to fall back on during crises? Estimating how many people will be displaced, lose their housing, or be unable to purchase critical medications and food is the right way to think about CRQ economic and financial impact for public-sector concerns. The same is true for confidentiality losses: how will a breach of local taxpayer information affect the citizens you serve? What kinds of economic activity will it hinder, how many hours of their time will be spent rectifying fraudulent events, and what is the economic impact of a loss of privacy?

These kinds of questions and more can be the basis for assessing quantitative cyber risk impact scenarios for public-sector organizations that plan to use the FAIR standard. FAIR advocates using the accounting process called activity-based costing (ABC) to think about all the costs incurred by various parties as events play out in the lives of those affected. This will give the organization a sense (using ranges of impact representing least, most-likely, and most severe results) of where priority for a particular service lies. When we consider how much citizens rely on their government to provide basic services and critical infrastructure, it is imperative that we endeavor to accurately reflect the economic impact of the failure of these services not just on for-profit industry, but on the underserved and vulnerable in the community who need these services the most. Not providing accurate valuations of the impact on human life will result in a misallocation of resources at best, and unnecessary loss of life at worst.
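
To make the least/most-likely/most idea concrete, here is a minimal Monte Carlo sketch of annual economic impact for a hypothetical municipal availability scenario. The event frequencies, dollar ranges and the use of triangular distributions are illustrative assumptions for this sketch, not outputs or requirements of the FAIR standard.

```python
# Minimal sketch: Monte Carlo estimate of annual economic impact for a
# public-sector availability scenario, using least / most-likely / most ranges.
# The specific numbers and the triangular distributions are illustrative.
import random

def simulate_annual_impact(trials: int = 100_000) -> list[float]:
    results = []
    for _ in range(trials):
        # events per year: least 0, most likely 1, most 3 (assumed range)
        events = round(random.triangular(0, 3, 1))
        loss = 0.0
        for _ in range(events):
            # per-event impact combining displaced residents, lost tax revenue,
            # and recovery cost (the activity-based costing idea), in dollars
            loss += random.triangular(250_000, 4_000_000, 1_000_000)
        results.append(loss)
    return results

losses = sorted(simulate_annual_impact())
print(f"median annual impact : ${losses[len(losses) // 2]:,.0f}")
print(f"90th percentile      : ${losses[int(len(losses) * 0.9)]:,.0f}")
```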

About the author: Jack Freund, Ph.D., CISA, CRISC, CISM, is director, risk science for RiskLens, member of the Certified in Risk and Information Systems Control (CRISC) Certification Working Group, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, IAPP Fellow of Information Privacy, and ISACA’s 2018 John W. Lainhart IV Common Body of Knowledge Award recipient.

How to Approach Mitigating Third-Party Risk

Jan Anisimowicz

Vendor management comprises all processes required to manage third-party vendors that deliver services and products to organizations. Significant effort is required from both the enterprise and the vendor to maximize the benefits received from the service and/or product while simultaneously mitigating associated risks.

Keeping in mind the increasing scale, scope and complexity of these vendor services, the related risks and the importance of effective vendor management increase proportionately. For example, under GDPR, if a data processor fails to follow the organization’s compliance requirements and a data breach occurs, the organization faces the risk of severe fines.

Third parties are often one of the weakest links in an enterprise’s security. Every day, cyber-related incidents such as data breaches occur, and they can have significant impact on the enterprise. As a result, organizations have devoted more and more resources to vendor risk management, but this remains mainly a manual process. Although vendor risk mitigation is crucial for every organization, most enterprises still know almost nothing about their vendors.

While talking to representatives of dozens of organizations that use third-party services, I noticed that they often initially underplayed the importance of this topic. It was only after we had discussed key issues related to the tasks performed and services rendered by third parties, as well as their implications, that my interlocutors began to notice the true weight of the issue at hand. Usually the most important questions are: How do we start the process? What are the initial steps?

Below are my recommended tips that could support your initial activities in the vendor risk management process:

Compile a List of All Your Vendors
Commonly, the main obstacle is limited knowledge of your providers, especially the smaller ones that provide goods or services of lesser monetary value or to a narrow business niche. However, it is my opinion that an organization should have a thorough knowledge of all its business partners, in all areas of their operation. Moreover, this information ought to be kept in a single database. At this point, I would like to point out a common challenge for large organizations. It is highly possible that some vendors could be managed by “shadow” engagements, not included in the official database. Knowledge of these is important to ensure the risks are addressed. “Shadow” vendor management initiatives could significantly limit the single version of the truth about the vendors and could create severe risks that are complicated to mitigate. To avoid (or at least to reduce) this kind of problem, organizations have to focus on a formal vendor management process with strong support from the C-suite.

Create a List of Services You Consider Relevant to Your Organization
When the consolidated list of vendors is ready, it is recommended to create a list of the services they deliver. This list ought to cover every area in which your organization receives support from outside contractors. Each service should be accompanied by an indicator of its significance to your operations (a finite numerical scale or a set of quality descriptors is recommended). Rating the business importance of all services will allow you to make each vendor’s risk profile more precise. Knowing who performs which services for your organization is an important step in limiting the risk of potential data leakages.
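
A minimal sketch of these two steps might look like the following, with each outsourced service carrying a significance rating that rolls up into a per-vendor profile. The 1-5 scale, the example vendors and the roll-up rule (highest rating wins) are assumptions for illustration only.

```python
# Minimal sketch: attach a significance rating to each outsourced service and
# roll the ratings up into a per-vendor risk profile. The scale, example data,
# and roll-up rule are illustrative assumptions.
from collections import defaultdict

services = [
    # (vendor, service, significance 1 = low ... 5 = critical)
    ("Acme Payroll", "payroll processing", 3),
    ("CloudHost Ltd", "hosting of customer portal", 5),
    ("CloudHost Ltd", "test environment hosting", 2),
    ("OfficeSupplies Co", "stationery delivery", 1),
]

profile: dict[str, int] = defaultdict(int)
for vendor, service, significance in services:
    profile[vendor] = max(profile[vendor], significance)

for vendor, rating in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: highest service significance = {rating}")
```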

Editor’s note: Anisimowicz will present further insights on this topic at the 2019 GRC Conference, to take place 12-14 August in Ft. Lauderdale, Florida, USA.

NIST Risk Management Framework: What You Should Know

Baan Alsinawi

In late December 2018, NIST published a second revision of SP 800-37, Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. The revised publication addresses an updated Risk Management Framework (RMF) for information systems, organizations, and individuals, in response to Executive Order 13800 and OMB Circular A-130 regarding the integration of privacy into the RMF process.

Now that the dust has settled, we are taking another look at the update. If achieved as intended, its objectives tie C-level executives more closely to operations and significantly reduce organizations’ information technology footprint and attack surface. The update also promotes IT modernization objectives, prioritizes security and privacy activities to focus protection strategies on the most critical assets and systems, and more closely incorporates supply chain risk management into the framework.

A Closer Look At The Updates
This version of the publication addresses how organizations can assess and manage risks to their data and systems by focusing on protecting the personal information of individuals. Information security and privacy programs share responsibility for managing risks from unauthorized system activities or behaviors, making their goals complementary and coordination essential. The second revision of the RMF now ties the risk framework more closely to the NIST Cybersecurity Framework (CSF). The update provides cross-references so that organizations using the RMF can see where and how the CSF aligns with the current steps in the RMF.

It also introduces an additional preparation step, addressing key organizational and system-level activities. On the organization level, these activities include assigning key roles, establishing a risk management strategy, identifying key stakeholders, and understanding threats to information systems and organizations. System level preparation activities include identifying stakeholders relevant to the system; determining the types of information processed, stored, and transmitted by the system; conducting a system risk assessment; and identifying security and privacy requirements applicable to the system and its environment.

Preparation can achieve efficient and cost-effective execution of risk management processes. The primary objectives of organization level and system level preparation are to:

  • Facilitate better communication between senior leaders and executives in the C-suite, and system owners and operators.
  • Align organizational priorities with resource allocation and prioritization at the system level
  • Convey acceptable limits regarding the selection and implementation of controls within the established organizational risk tolerance
  • Promote organization-wide identification of common controls and the development of tailored control baselines, to reduce the workload on individual system owners and the cost of system development and protection
  • Reduce the complexity of the IT infrastructure by consolidating, standardizing, and optimizing systems, applications, and services through the application of enterprise architecture concepts and models
  • Identify, prioritize, and focus resources on high-value assets and high-impact systems that require increased levels of protection
  • Facilitate readiness for system-specific tasks

The incorporation of supply chain risk management (SCRM) is another important theme addressed in the publication. Specifically, organizations must ensure that security and privacy requirements for external providers, including the controls for systems processing, storing, or transmitting federal information, are delineated in contracts or other formal agreements. It is ultimately the responsibility of the organization and the authorizing official to respond to risks resulting from the use of products, systems, and services from external providers.

Finally, SP 800-37 Rev. 2 supports security and privacy safeguards from NIST’s Special Publication 800-53 Revision 5. The updated RMF document states that Revision 5 separates the control catalog from the control baselines that historically have been included in that publication. A new companion publication, NIST Special Publication 800-53B, Control Baselines and Tailoring Guidance for Federal Information Systems and Organizations, defines the recommended baselines.

In other changes to the RMF, Appendix F, System and Common Control Authorizations, now includes Authorization to Use (ATU) as an authorization decision applied to cloud and shared systems, services, and applications. It would be employed when an organization chooses to accept the information in an existing authorization package generated by another organization. Page 123 notes, “An authorization to use requires the customer organization to review the authorization package from the provider organization as the fundamental basis for determining risk… An authorization to use provides opportunities for significant cost savings and avoids a potentially costly and time-consuming authorization process by the customer organization.” Additionally, the appendix addresses a facility authorization, allowing systems residing within a defined environment to inherit the common controls and the affected system security and privacy plans.

Summing It Up
SP 800-37 promotes the integration of the agency’s privacy program into the RMF, allowing the organization to produce risk-related information on both the security and privacy posture of organizational systems and the mission/business processes supported by those systems. It also connects senior leaders to operations to better prepare for RMF execution, providing closer linkage and communication between the risk management processes and activities at the C-suite or governance level of the organization and the individuals, processes, and activities at the system and operational levels of the organization. All in all, these are much-welcome changes to the framework, as better integration means tighter and more efficient controls that ensure assets are properly safeguarded by private and public sector organizations.

Author's note: Baan Alsinawi, president and founder of integrated risk management firm TalaTek, has more than two decades of experience in information technology (IT). She is a member of ISC2 and is CISSP and ITIL certified.

Continuous Security Validation

Berk Algan

No corporate executive should feel secure.

Every day, we keep hearing about yet another company getting hacked or losing sensitive data. Many enterprises do not even realize their systems are compromised until they receive an unexpected notification from an external party. Cybersecurity remains a top risk for companies and a hot topic for boardrooms.

To fend off cyber threats, most companies focus on:

  • Hiring security professionals or third parties with expertise in various security domains
  • Establishing processes such as patch management and asset management
  • Implementing various security tools and monitoring devices
  • Creating control libraries in alignment with regulations and industry standards
  • Establishing security training and awareness programs

But, how do we know our cyber defenses actually work?

Traditional Security Validation includes testing individual controls or a set of controls to ensure that they are designed appropriately and working effectively. For example:

  • Validating that a firewall is configured according to a company’s configuration standards is considered testing of a singular control.
  • Testing a set of relevant controls to verify whether the company is in compliance with the Payment Card Industry Data Security Standard (PCI-DSS) would be considered testing a set of controls.
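
To make the first example above concrete, the sketch below compares a simplified firewall rule set against a hypothetical configuration standard (deny by default, no any-to-any rules, management access restricted to a defined network). The rule format and the standard itself are illustrative assumptions; a real check would parse the device’s actual configuration.

```python
# Minimal sketch: verify that a firewall ruleset matches a company
# configuration standard. The standard and rule format are illustrative.

STANDARD = {
    "default_action": "deny",
    "forbid_any_any": True,
    "mgmt_sources": {"10.0.0.0/24"},
}

def validate_firewall(config: dict) -> list[str]:
    findings = []
    if config.get("default_action") != STANDARD["default_action"]:
        findings.append("default action is not deny")
    for rule in config.get("rules", []):
        if STANDARD["forbid_any_any"] and rule["src"] == "any" and rule["dst"] == "any":
            findings.append(f"rule {rule['id']} permits any-to-any traffic")
        if rule.get("service") == "ssh" and rule["src"] not in STANDARD["mgmt_sources"]:
            findings.append(f"rule {rule['id']} allows SSH from a non-management source")
    return findings

example = {
    "default_action": "deny",
    "rules": [
        {"id": 10, "src": "any", "dst": "any", "service": "any"},
        {"id": 20, "src": "192.168.1.0/24", "dst": "10.1.1.5", "service": "ssh"},
    ],
}
print(validate_firewall(example))
```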

While testing security controls in a traditional way could serve its intended purposes, the company should not feel secure solely based on traditional point-in-time control testing. The reality is that threats and an organization’s systems change on a daily basis, and a traditional control test that was effective yesterday may no longer be effective in mitigating a threat today.

Adversaries will always look for any weakness in a company’s environment, ranging from misconfigured systems to overly permissive access rules. New threats, vulnerabilities and zero-days are identified every day.

The only effective way to combat this is to think and act like an adversary.

Continuous Security Validation allows an organization to take cyber attackers’ perspective and stress-test its security stance.

While it includes elements of the traditional validation methods described above, it focuses more on walking in hackers’ shoes.

To implement and execute on Continuous Security Validation, a company could leverage industry best practices. A leading framework in this area is MITRE ATT&CK™ for Enterprise (ATT&CK).

ATT&CK for Enterprise is a framework that takes the perspective of an adversary trying to hack into a company using various known attack vectors. This framework provides a library of real-world hacking activities for companies to simulate in their own networking environment.

In its simplest form, an organization could pick a relevant attack vector (e.g. exfiltration over alternative protocol) from the ATT&CK Matrix and test its cyber defenses to validate that it could withstand that particular attack. They can then review and prioritize mitigation of identified gaps.
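
As a rough, lab-only illustration of the exfiltration example, the sketch below attempts outbound connections on ports that legitimate traffic should not use and reports whether egress controls block them. The target host and port list are hypothetical, and such a test should be run only against infrastructure you own and with explicit authorization.

```python
# Minimal sketch: a lab-only check for "exfiltration over alternative protocol".
# It attempts outbound connections on ports legitimate traffic should not use
# and records whether egress controls block them. Host, ports, and timeout are
# illustrative assumptions; run only with authorization.
import socket

TEST_TARGET = "egress-test.example.com"   # hypothetical external test host
ALTERNATIVE_PORTS = [53, 2222, 8443]      # DNS, non-standard SSH, alt HTTPS

def egress_allowed(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True          # connection succeeded: egress not blocked
    except OSError:
        return False             # blocked, filtered, or unreachable

for port in ALTERNATIVE_PORTS:
    status = "ALLOWED (investigate)" if egress_allowed(TEST_TARGET, port) else "blocked"
    print(f"outbound tcp/{port} to {TEST_TARGET}: {status}")
```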

It’s important to note that internal red-teaming (an internal group taking the hackers’ perspective) is a core component of this approach, whereby these teams use real scenarios to test actual response and detection capabilities rather than just testing controls.

Continuous Security Validation will help a company: 

  • Increase its cyber resiliency by frequent testing and validation
  • Test the effectiveness of its security controls and tools in preventing specific attack vectors
  • Develop an organizational cyber threat model to focus on higher risk areas and key information assets
  • Methodically analyze identified security observations

At the 2019 GRC Conference in Fort Lauderdale, Florida, USA, to take place 12-14 August, I will further explore Continuous Security Validation and describe how a company could use it to reduce its cyber exposure. We will also review key elements of ATT&CK for Enterprise and discuss how it can be leveraged to stand up and operate a Continuous Security Validation process.

About the author: Berk Algan is a risk management executive who takes pride in building exceptional Governance, Risk and Compliance (GRC) functions and developing high-performing teams. He currently leads the Technology & Security Risk Management group at Silicon Valley Bank.
