Introduction

It is no secret that many enterprises have struggled to derive measurable value from their threat intelligence programs. In a recent study commissioned by Google Cloud, which surveyed more than 1,500 IT and cybersecurity professionals, 61% reported being overwhelmed by the number of threat intelligence feeds, 59% cited challenges in making intelligence actionable, and 59% struggled to verify the validity or relevance of threats.1 These challenges result in millions of dollars in wasted resources and leave enterprises exposed to preventable risk.

This white paper delivers a practical blueprint with specific recommendations that risk management, security, and IT executives can employ to build and refine threat intelligence programs resulting in measurable impacts. Now is the right time to prioritize a threat intelligence program, as evidenced by an array of pressing issues:

  • The emergence of novel threat vectors, such as infostealer malware, has led to millions of breached identities and created new risk for enterprise employees and executives.
  • The escalation of ransomware tactics against enterprises, intensified by a volatile geopolitical environment, is rapidly reshaping the threat landscape.
  • The rapid adoption of artificial intelligence (AI) solutions introduces both risk and opportunity for enterprise security programs.
  • The rapid commodification of the cybercrime ecosystem is leading to substantially scaled attacks against enterprises.

Threat intelligence, threat modeling, and enterprise risk management are inherently interconnected disciplines within a mature cybersecurity program. Threat modeling is a structured, proactive process for identifying, assessing, and mitigating potential security risk before it can be exploited. It requires viewing systems through an adversarial lens to uncover vulnerabilities, evaluate the likelihood and potential impact of an attack, and design targeted defensive controls.

Threat intelligence complements this process by collecting, analyzing, and applying relevant, actionable intelligence to inform threat models and strengthen the enterprise’s overall security posture. Together, threat modeling and threat intelligence underpin an effective enterprise risk management program—one that continuously aligns technical risk mitigation with business objectives and reduces exposure to an acceptable residual level.

The Threat Landscape and Evolving Cybercrime Ecosystem

One of the key drivers of cyberthreat intelligence is the rapidly evolving nature of the cybercrime ecosystem. Building an effective threat intelligence program is inseparable from the frenetic pace of change as the business of cybercrime matures. Cybersecurity Ventures estimates that cybercrime will cost the global economy US$10.5 trillion in 2025.2 New groups, tactics, and malware variants appear—and with equal speed disappear. In many ways, cybercrime now functions as an intricate market, complete with complex supply chains, role specialization, and economies of scale.

For these reasons, combating cybercrime requires understanding a complicated economy with enormous real-world consequences. Take the LockBit ransomware group as an example. At first glance, a naïve observer might assume it consists of a few self-sufficient individuals systematically compromising companies. But in fact, LockBit’s success has been made possible only through the multifaceted supply chains that have developed over the past decade. Infostealer malware and vulnerability exploitation provided initial access brokers (IABs) with scalable ways to compromise enterprise environments and resell that access to ransomware affiliates. These affiliates—effectively independent operators working on behalf of the group—created the scale necessary to carry out thousands of successful attacks that have resulted in hundreds of millions of dollars in payments.3

Understanding the elements of this ecosystem, the operational methods by which attacks are carried out, and—most importantly—the points at which they can be disrupted is a core task of effective threat intelligence teams. It also represents one of the greatest opportunities for organizations to materially reduce risk.

Infostealer Malware

Infostealer malware infects a device and steals credentials and other highly sensitive information saved in the browser, including session cookies, browsing history, and autofill data. The stolen data is then exfiltrated to command-and-control infrastructure controlled by the threat actor. The result of an infection is a “stealer log,” a file containing all the stolen information from a single user’s device.

In the past five years, infostealers have risen to prominence in the cybercrime ecosystem and become a major issue for enterprise security teams.

Infostealers are typically introduced when an unsuspecting user clicks a malicious ad and accidentally infects themselves. In many cases, users save corporate credentials on their home devices or infect their corporate devices, resulting in the theft of dozens of credentials that provide direct access to enterprise IT infrastructure, HR portals, financial portals, and other sensitive platforms.

Threat actors do not just use stealer logs—in most cases, they sell them. The stealer log market is lucrative, with millions of logs being sold annually on dark web “auto shops,” such as Russian Market, and in private instant messaging channels.

According to Verizon’s 2025 Data Breach Investigations Report, 30% of stealer logs are obtained from enterprise-licensed environments.4 When extrapolated across tens of millions of stealer logs, this creates a massive security blind spot. Several major incidents in 2023 and 2024 highlighted the role of infostealer malware. For example, investigations into the 2024 attacks against Snowflake customer environments found that criminals leveraged stolen credentials—many of which were originally harvested by infostealers on employee endpoints and then purchased on criminal marketplaces.5

Infostealer malware remains one of the most prominent types of exposure for modern enterprise environments, yet many enterprises have no effective monitoring in place for it. Security organizations that have invested in threat intelligence are able to identify the rising threat and begin adopting new controls, technologies, and processes to mitigate the risk. However, organizations without mature threat intelligence programs miss critical insights that could improve controls to prevent, detect, and respond to infostealer malware infections.

Best Practices for Effective Threat Intelligence

The rise of infostealer malware illustrates the importance of building threat intelligence teams that can rapidly identify and respond to changes in the threat landscape. Ensuring a threat intelligence program is effective requires a cohesive approach firmly founded on the organization’s threat model. This approach involves creating highly specific priority intelligence requirements (PIRs), mapping types of threat intelligence to key business outcomes, consulting stakeholders, and then operationalizing this intelligence.

Priority Intelligence Requirements

PIRs are the linchpin of actionable and effective threat intelligence. Well-crafted PIRs require careful thought and should be grounded in the enterprise’s unique threat model, risk management strategy, and overarching business objectives. Developing and refining PIRs is not a one-time exercise; it is an ongoing process that requires deliberate engagement with stakeholders across the enterprise. PIRs should be established across all four categories of threat intelligence: strategic, tactical, operational, and technical.

Effective PIRs share several characteristics, including:

  • Specificity—Vague requests like "monitor ransomware threats" provide little direction. Specific PIRs ask precise questions, such as, “What social engineering techniques are threat actors using to bypass help desk authentication procedures in similar companies?”
  • Actionability—Each PIR should lead to concrete decisions or actions. Intelligence that cannot inform a decision is an academic exercise, not an operational necessity.
  • Measurability—PIRs should enable the organization to track progress and assess the effectiveness of mitigation efforts.
  • Time-boundedness—PIRs should specify frequency and timeliness expectations.
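
A PIR that satisfies these characteristics can be captured as a structured record rather than free text, which makes ownership, KPIs, and review cadences auditable. The sketch below is illustrative only; the field names are assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class PIR:
    """A single priority intelligence requirement (illustrative schema)."""
    question: str             # the specific question the PIR asks
    category: str             # strategic, tactical, operational, or technical
    stakeholders: list        # teams that consume the resulting intelligence
    decision_supported: str   # the concrete decision the intelligence informs
    kpi: str                  # how progress is measured
    review_cadence_days: int  # how often the PIR is revisited

pir = PIR(
    question=("What social engineering techniques are threat actors using "
              "to bypass help desk authentication procedures in similar companies?"),
    category="tactical",
    stakeholders=["security operations", "IT help desk"],
    decision_supported="Update help desk identity verification procedures",
    kpi="Number of documented bypass techniques with mapped countermeasures",
    review_cadence_days=90,
)
print(pir.question)
```

Storing PIRs this way also makes it trivial to flag requirements whose review cadence has lapsed.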

A threat model that accurately prioritizes threats to the organization can help business and risk leaders establish effective PIRs that are aligned with risk objectives. As an example:

  1. A cryptocurrency exchange identifies that a compromise of key employee accounts could lead to a major breach and loss of millions of dollars.
  2. The enterprise establishes a PIR around identifying the most common techniques used by threat actors to compromise accounts.
  3. The threat intelligence team identifies that threat actor groups such as Scattered Spider are increasingly using socially engineered help desks to take over executive accounts.
  4. The cryptocurrency exchange increases help desk training for all IT support employees and creates an escalation pathway for suspicious cases.
  5. The exchange detects and prevents two social engineering attacks targeting their help desk in the following six-month period.

Mapping Types of Threat Intelligence to Business Requirements

Cyberthreat intelligence comes in many forms, shaped by an enterprise’s risk profile, threat model, and unique skill sets and specializations. There are four types of threat intelligence: strategic, tactical, operational, and technical. Categorizing all intelligence across the four types and then mapping each type to specific business and risk requirements can help in managing a threat intelligence program.

Strategic Threat Intelligence

Strategic threat intelligence is high-level analysis designed to provide enterprise leaders with insight into the overarching threat landscape. It examines geopolitical, economic, regulatory, and industry-specific factors that may shape the likelihood, impact, or evolution of cyberthreats. The goal is to inform long-term business strategy, risk management priorities, and investment decisions.

Example:
A threat intelligence team prepares a one-to-two-page brief for a financial services board of directors that analyzes how North Korean money-laundering tactics are targeting the sector and outlines potential countermeasures.

Tactical Threat Intelligence

Tactical threat intelligence translates adversary tactics, techniques, and procedures (TTPs) into near-term defensive action. It evaluates the specific techniques most likely to affect the enterprise and guides concrete changes to controls, processes, and workforce behavior to measurably reduce risk.

Example:
After identifying a rise in infostealer malware on employees’ personal devices that captures stored corporate credentials, the enterprise discontinues bring your own device (BYOD) access and shortens session-cookie time to live (TTL) in its identity and access management (IAM) platform to limit token replay and lateral movement.

Operational Threat Intelligence

Unlike tactical intelligence, which evaluates broad external TTPs and their implications for control design, operational intelligence draws on data directly tied to a specific organization’s assets, workforce, and infrastructure. It enables defenders to take immediate, concrete actions in response to threats targeting their environment.

Example:
A mid-sized hospital integrates a dark web monitoring subscription into its security operations. When compromised employee credentials appear in criminal marketplaces, the intelligence feed triggers automatic password resets across affected accounts, reducing the risk of unauthorized access.

Technical Threat Intelligence

Technical threat intelligence consists of machine-readable or near-machine-readable artifacts that defenders can plug into tools to enrich telemetry, guide hunts, and automatically block malicious activity. It most often includes indicators of compromise (IoCs), such as file hashes, malicious domains, IP addresses, and URLs, and detection content like YARA or Sigma rules and ATT&CK mappings. Technical intelligence typically feeds security information and event management (SIEM) and security orchestration, automation, and response (SOAR) systems, as well as endpoint detection and response (EDR), web gateways, email security, and firewalls.

Example:
A threat hunting team ingests a vetted IoC set attributed to the ShinyHunters threat group into the SIEM and EDR. They run targeted queries and deploy temporary blocks on matching domains and hashes, then pivot on any hits to scope impact, isolate endpoints, and trigger credential resets where needed.
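
A minimal sketch of the matching step in this workflow, assuming the vetted indicators arrive as plain domain and hash sets (the indicator values below are placeholders, not real ShinyHunters IoCs):

```python
# Illustrative IoC matching: intersect observed telemetry with a vetted set.
ioc_set = {
    "domain": {"bad-domain.example", "phish-login.example"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

# Sample telemetry pulled from (hypothetical) SIEM/EDR queries:
observed_domains = ["intranet.example.com", "phish-login.example"]
observed_hashes = ["e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"]

domain_hits = sorted(set(observed_domains) & ioc_set["domain"])
hash_hits = sorted(set(observed_hashes) & ioc_set["sha256"])

for hit in domain_hits:
    print(f"ALERT: outbound connection to known-bad domain {hit}")
for hit in hash_hits:
    print(f"ALERT: file hash matches vetted IoC {hit}")
```

Any hit would then feed the pivot described above: scope impact, isolate the endpoint, and trigger credential resets.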

Engaging Stakeholders Across the Enterprise

To build an effective threat intelligence program, the broader business context needs to be considered. For example, an organization planning a strategic expansion, or planning to adopt new technology such as AI, may require highly specific PIRs that align with the specific enterprise risk. However, a cybersecurity team in a mid-to-large enterprise may not be fully informed of key strategic decisions made by stakeholders. Engaging stakeholders while developing PIRs and implementing the threat intelligence program is critical to success.

Case Study

A fictitious bank with an established threat intelligence program is planning to expand into Southeast Asia. The three-year expansion plan centers on providing a financial services and investment application in Thailand, Vietnam, and Malaysia. To support this, the organization wants to define strategic, tactical, operational, and technical PIRs for the threat intelligence team.

Strategic Intelligence PIRs

Identify cyber-related regulatory and compliance risk.

  • Stakeholders—Board of directors, governance, risk, and compliance team, and legal
  • Success criteria—Compliance and regulatory landscape mapped out based on specific application features; defined process for identifying upcoming regulatory requirements in Thailand, Vietnam, and Malaysia that would affect the bank
  • Key performance indicator (KPI)—All cyber-related regulatory and compliance requirements documented in a searchable spreadsheet with key required controls mapped to each requirement

Tactical Intelligence PIRs

Identify TTPs associated with account takeover attacks against financial services applications in Southeast Asia and provide potential countermeasures. Identify whether any unique tactics are employed to compromise users (compared to trends in North America).

  • Stakeholders—Fraud prevention, IAM, and security operations
  • Success criteria—A repository of known TTPs and associated countermeasures to prevent account takeover attacks against financial services consumers
  • KPI—Minimum of 30 documented TTPs related to account takeover attacks

Operational Intelligence PIRs

Identify a threat intelligence provider that can supply leaked session cookies and credentials, enabling automated password resets and session rotation for users infected with infostealer malware.

  • Stakeholders—Fraud prevention, IAM, and legal and compliance
  • Success criteria—Successful automation and integration of exposed session cookies and credentials to prevent account takeover attacks in concert with the IAM team
  • KPI—59% reduction in account takeover attacks

Technical Intelligence PIRs

Identify a region- or country-specific IoC feed that can be cross-matched against customer sessions to identify potential account takeover attacks.

  • Stakeholders—IAM, security operations, and fraud
  • Success criteria—Implementation of IoC feeds that can be adopted to flag accounts at risk of fraudulent transactions
  • KPI—Average age of IoCs in the feed
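
The age-based KPI above can be computed directly from the feed’s first-seen timestamps. A minimal sketch with invented sample data:

```python
from datetime import date

# Illustrative KPI computation: average age of indicators in a feed.
today = date(2025, 6, 1)
indicators = [
    {"value": "198.51.100.7", "first_seen": date(2025, 5, 28)},
    {"value": "203.0.113.9", "first_seen": date(2025, 5, 20)},
    {"value": "bad.example", "first_seen": date(2025, 4, 2)},
]

ages = [(today - i["first_seen"]).days for i in indicators]
avg_age_days = sum(ages) / len(ages)
print(f"Average IoC age: {avg_age_days:.1f} days")
```

A rising average age suggests the feed is going stale relative to the regional threat activity it is meant to track.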

Setting PIRs does not just involve mitigating cyberrisk. Mature threat intelligence programs can also be used to support broader business objectives. In this case, the bank wants to expand into several Southeast Asian countries (and has therefore built its PIRs around mitigating risk associated with account takeover attacks). Building PIRs in conjunction with the business benefits the threat intelligence team as well, as it aligns the team’s work directly with the key objectives of the organization and creates a tangible return on investment (ROI). To achieve this, security teams cannot operate on an island away from the broader business, and threat intelligence teams need to be closely integrated with security operations, incident response, threat hunting, and red teams to create value.

Communicating Up and Down the Leadership Chain

Communicating threat intelligence to different stakeholders—with different levels of technical understanding of the business risk—is one of the most fundamental challenges for a threat intelligence team.

Board-Level Communication

Boards of directors are primarily concerned with enterprise governance, regulatory changes, and strategic challenges. Threat intelligence should be high level and contextual, with minimal technical detail.

When preparing content for the board, it is useful to ask the following questions:

  • Does this piece enable or contextualize strategic decision making effectively?
  • Is the level of detail appropriate to support informed decisions?
  • Can any information be removed or simplified for greater clarity?

Example:
The threat intelligence team delivers a semiannual summary and presentation of the most significant threat vectors to the organization, along with the organization’s risk profile, critical mitigations, and key changes in the threat and regulatory landscape.

Nontechnical C-Suite and Executive Level

Like boards, chief executive officers (CEOs) and other C-suite executives seek strategic guidance, but they also expect more operational context that connects threats directly to the business. At this level, threat intelligence should link security developments to enterprise priorities such as growth, market expansion, customer trust, and regulatory exposure. The goal is to enable executives to see how evolving threats may affect strategy, investment, and brand reputation, while avoiding unnecessary technical depth.

When preparing content for the C-suite, it is useful to ask:

  • Does this intelligence explain how current threats could affect revenue, customer confidence, or market positioning?
  • Are risk scenarios expressed in terms of likelihood and potential business impact rather than technical jargon?
  • Is there a clear set of recommended options or decisions for executives to consider?

Example:
The threat intelligence team provides a quarterly executive briefing that highlights trends in ransomware targeting the sector, outlines potential financial and reputational implications, and presents investment options for enhancing resilience in line with the enterprise’s threat model.

Director and Technical Manager Level

Preparing threat intelligence aimed at a director or technical manager audience is often where threat intelligence teams will feel most comfortable. These reports should be detailed, fully contextualized, and include original source material where possible. They should also include specific recommendations for security controls grounded in the most recent TTPs, information about breaches, and technical indicators of significant risk.

Example:
A threat intelligence team prepares a weekly report for the vulnerability management team on recent vulnerability disclosures and top vulnerabilities being targeted by ransomware groups, along with recommendations for prioritization.

Establishing Confidence Levels and Probabilities

When preparing any type of threat intelligence, it can be useful to borrow best practices from the US intelligence community, which is widely considered the gold standard for intelligence collection, analysis, and dissemination.6 One technique that the US intelligence community commonly uses for analysis is assigning explicit confidence levels.7 These levels include:

  • High confidence—The judgment is well supported by technical intelligence, well sourced, and likely to be correct.
  • Moderate confidence—Information is credibly sourced but may contain gaps in sourcing or credibility issues. Other interpretations should be considered.
  • Low confidence—The analyst has information from a single unreliable or partially reliable source. It should be treated as a possibility.

This technique can yield significantly clearer communication of probability and confidence, particularly in settings that do not require providing operational detail.

Scaling a Threat Intelligence Program

Scaling a threat intelligence program is an often overlooked, but important, challenge for many organizations. Many threat intelligence programs are built on manual processes and rely on a high degree of subject matter expertise and knowledge but lack defined processes.

As an enterprise matures its threat intelligence processes, it must ensure it is creating the right conditions for scalability. PIRs must also be continuously updated based on both changes to the organization’s threat model and the strategic approach to the market.

Legal and Regulatory Considerations

Threat intelligence work often involves infiltrating dark web forums and marketplaces, handling stolen data, and operating in other sensitive areas that could create legal risk for the business and individual employees. When building or maturing a threat intelligence program, close alignment with legal counsel is pivotal, as failure to meet applicable privacy, compliance, and criminal laws can result in a range of penalties.

To accomplish this alignment, enterprises should consider:

  • Engaging corporate legal counsel early and often to ensure all local and regional laws, regulations, and intelligence requirements are adhered to
  • Adhering to all applicable data privacy regulations
  • Consulting specific guidance on how to collect, analyze, and securely disseminate threat intelligence without violating criminal and civil statutes, such as the US Department of Justice’s Legal Considerations when Gathering Online Cyber Threat Intelligence and Purchasing Data from Illicit Sources8

In many cases, threat intelligence teams build direct relationships with federal law enforcement bodies, particularly when they may need to make frequent criminal reports or have unique information that can help investigative entities.

Improving the Operational Impact of Threat Intelligence

When determining how to operationalize a threat intelligence program and deliver real-world value, organizations should consider their technology stack, tool selection, and opportunities for automation.

Technology Stack and Tool Selection

Choosing the correct technology to enable threat intelligence can be daunting. Enterprises with a high degree of sophistication often purchase multiple platforms to reduce the chances of missing a critical event, whereas those with lower security budgets often purchase a single platform to optimize costs and then must work with that product’s limited capabilities or features. When selecting a threat intelligence platform, organizations should:

  • Build a list of requirements based on PIRs—PIRs and organizational maturity play a significant role in selecting a threat intelligence platform. When reviewing PIRs, identify unmet technology needs to begin drafting the list of requirements. Share the list with a potential vendor early in the sales process to avoid wasting time on platforms that will not meet the requirements.
  • Consult with a range of teams—Teams within security, fraud, or governance, risk, and compliance may have unmet needs. Confer with these teams to identify technical requirements from other parts of the organization.
  • Evaluate and conduct extensive proofs of concept—Once a short list of potential vendors that meet some or all technical requirements is created, begin conducting proof-of-concept deployments. Evaluate each vendor against the technical requirements, how easy they are to work with, and how well they respond to requests.
  • Integrate deeply—Once a vendor is selected, the next step is to integrate the platform with the team and operational processes. Threat intelligence platforms often provide application programming interfaces (APIs) and automation playbooks for multiple teams. Build automations, processes, and workflows to leverage specific features and maximize the value derived.

Organizations may wish to use a template when evaluating potential platform vendors. As an example, the following categories and weights may be tailored to the needs of the enterprise:

  • Depth of Coverage (25%)—Visibility across the dark web, criminal forums, malware/infostealer telemetry, brand monitoring, and industry/geographic relevance
  • Data Quality and Relevance (20%)—Accuracy, timeliness, enrichment (TTP mapping, attribution), and false positive rates
  • Integration and Automation (15%)—API maturity, SIEM/SOAR/IAM/EDR integrations, and playbooks for automated response
  • Analytic Value (15%)—Curated reporting, contextual advisories, and links to PIRs and enterprise threat models
  • Governance and Compliance (10%)—Legal and ethical sourcing, privacy compliance, and auditability
  • Commercials and Support (15%)—Cost versus value, onboarding experience, analyst support, and customer success maturity
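
One way to apply this template is a simple weighted scorecard. The sketch below uses the category weights above; the per-vendor ratings (0–10 scale) and vendor names are invented sample inputs.

```python
# Illustrative weighted vendor scorecard using the categories and weights above.
weights = {
    "depth_of_coverage": 0.25,
    "data_quality": 0.20,
    "integration_automation": 0.15,
    "analytic_value": 0.15,
    "governance_compliance": 0.10,
    "commercials_support": 0.15,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must total 100%

vendor_ratings = {
    "Vendor A": {"depth_of_coverage": 8, "data_quality": 7, "integration_automation": 9,
                 "analytic_value": 6, "governance_compliance": 8, "commercials_support": 7},
    "Vendor B": {"depth_of_coverage": 6, "data_quality": 8, "integration_automation": 5,
                 "analytic_value": 8, "governance_compliance": 9, "commercials_support": 8},
}

scores = {
    vendor: sum(ratings[category] * w for category, w in weights.items())
    for vendor, ratings in vendor_ratings.items()
}
for vendor, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: {score:.2f} / 10")
```

Keeping the ratings and weights in a shared spreadsheet or script makes the final selection auditable when stakeholders disagree.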

Automation as a Central Component of Threat Intelligence

Manual approaches to threat intelligence scale linearly, but modern threat intelligence teams need to scale exponentially. As a result, automating threat intelligence is perhaps the core piece of an effective program. There has been an explosion of data as the cybercrime economy continues to mature, and cybercrime communities expand onto Telegram, Signal, and other messaging applications. Automated approaches, including prioritization models, AI-enabled initial access broker analysis, and IoC feeds for threat hunting, can build on an already successful program to improve maturity and reduce the mean time to detection (MTTD) and mean time to response (MTTR).

AI is both the most promising accelerator of threat intelligence automation and a new class of risk. Machine learning (ML) and generative models can surface weak signals from noisy streams, cluster related activity, and contextualize observations at scale; they also introduce novel attack surfaces (data poisoning, model theft, prompt injection), compliance and privacy exposure, and governance debt if deployed haphazardly. Integrating AI into a threat intelligence program therefore demands a cross-functional operating model with clear decision rights and controls.

Parsing of Breached Identities for Prioritization

For large enterprises, automating the response to exposed credentials and breached identities provides a critical layer of defense and can materially reduce risk. A practical approach is to apply automation to prioritize stealer logs that contain enterprise credentials, using rules‑based detection that classifies each log by the relative risk of the domains and assets it references. Criteria for this prioritization approach include:

  • Critical business domains and single sign-on (SSO)/identity provider (IdP) endpoints (e.g., adfs.companyname.com, login.companyname.com)
  • Privileged roles and administrator scopes tied to the credentials
  • Presence of active session cookies or tokens
  • Multifactor authentication (MFA) posture and conditional‑access signals
  • Asset criticality and data sensitivity of the affected system
  • Evidence of reuse across multiple services

Threat intelligence prioritization rules help security teams focus their limited time and resources on the most relevant and critical threats. Examples of prioritization rules include:

  • A log containing credentials for adfs.companyname.com should be treated as a critical priority, immediately triggering incident response playbooks.
  • A log containing credentials for a lower‑impact system, such as a corporate merchandise site, may be classified as medium risk and handled with a streamlined remediation playbook.
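
A rules-based classifier along these lines can be sketched in a few lines. The domain names, log field names, and tiers below are assumptions for a hypothetical company, not a production ruleset:

```python
# Illustrative rules-based prioritization of stealer logs by riskiest referenced asset.
CRITICAL_DOMAINS = {"adfs.companyname.com", "login.companyname.com"}  # SSO/IdP endpoints
MEDIUM_DOMAINS = {"merch.companyname.com"}  # lower-impact systems (e.g., merchandise site)

def prioritize(log: dict) -> str:
    """Classify a stealer log; higher tiers trigger heavier playbooks."""
    domains = set(log.get("domains", []))
    if domains & CRITICAL_DOMAINS or log.get("has_active_session_cookie"):
        return "critical"   # immediately trigger incident response playbook
    if log.get("is_privileged_account"):
        return "high"
    if domains & MEDIUM_DOMAINS:
        return "medium"     # streamlined remediation playbook
    return "low"

log = {"domains": ["adfs.companyname.com", "news.example.com"],
       "has_active_session_cookie": False,
       "is_privileged_account": False}
print(prioritize(log))  # critical: the log references the SSO endpoint
```

In practice the remaining criteria (MFA posture, credential reuse, data sensitivity) would be added as further rules or as a weighted score rather than a strict tier ladder.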

LLM-Enabled Initial Access Broker (IAB) Analysis

IABs leverage stolen credentials, exploits, and phishing to gain initial access to enterprise environments. They then sell that access in the cybercrime ecosystem, often in forum posts that resemble auctions. Oftentimes, this is done without naming the organization that has been compromised; instead, high-level facts such as the enterprise revenue, industry, and headcount are provided.

Large language models (LLMs) have unlocked significant new possibilities for threat intelligence teams to not only automate data collection, but also to automate data contextualization at scale.

An example use case involves using LLMs to identify IAB posts and assist in processing and analyzing massive amounts of unstructured text data from the dark web, hacker forums, and other sources. This helps automate and accelerate a process that is traditionally manual and time consuming.
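
A sketch of the LLM-assisted step: build an extraction prompt and validate the model’s JSON reply into structured fields. The actual model call is omitted; `mock_response` is a hand-written stand-in for a plausible reply, and the prompt wording and JSON schema are assumptions, not a production design.

```python
import json

EXTRACTION_PROMPT = """You are analyzing dark web forum posts. If the post below
advertises initial access for sale, return JSON with keys: is_iab_post (bool),
industry, country, revenue_usd, access_type. Otherwise return {{"is_iab_post": false}}.

Post:
{post}
"""

def parse_iab_fields(raw: str) -> dict:
    """Validate and normalize the model's JSON reply before it enters the pipeline."""
    data = json.loads(raw)
    if not data.get("is_iab_post"):
        return {"is_iab_post": False}
    required = {"industry", "country", "revenue_usd", "access_type"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return data

post = "Selling VPN access. US logistics company, revenue $250M. Start bid $3k."
prompt = EXTRACTION_PROMPT.format(post=post)  # would be sent to the model

# Hand-written stand-in for what a model might return for the post above:
mock_response = json.dumps({
    "is_iab_post": True, "industry": "logistics", "country": "US",
    "revenue_usd": 250_000_000, "access_type": "VPN",
})
print(parse_iab_fields(mock_response)["industry"])  # logistics
```

The validation layer matters: structured output from an LLM should be treated as untrusted input and checked before it drives downstream matching against the enterprise’s own profile.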

Breached Credential Verification and Remediation

Breached credentials remain one of the most common initial access vectors exploited by threat actors. They also present a significant opportunity for enterprises to apply automation to validation and remediation. By establishing a relationship with a trusted threat intelligence provider, enterprises can receive timely alerts when employee email addresses and credentials appear in criminal marketplaces or stealer logs. Security engineering and identity teams can then build automated workflows to:

  • Validate whether exposed credentials are still active.
  • Enforce immediate remediation, such as password resets, forced logouts, or session token revocation.
  • Track exposure trends over time to measure the effectiveness of controls.
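The validate-then-remediate workflow above can be sketched as follows. The identity-provider hooks (`is_password_valid`, `force_reset`, `revoke_sessions`) are hypothetical callables to be wired to the enterprise's IdP API, not real library functions:

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    """An exposed credential pair surfaced by a threat intelligence alert."""
    email: str
    password: str

def remediate(exposure, is_password_valid, force_reset, revoke_sessions):
    """Validate an exposed credential and return the action taken."""
    if not is_password_valid(exposure.email, exposure.password):
        return "stale"                   # already rotated; record for exposure-trend tracking
    force_reset(exposure.email)          # enforce immediate password reset
    revoke_sessions(exposure.email)      # invalidate active sessions and tokens
    return "remediated"
```

Logging each returned action over time gives the exposure-trend data needed to measure control effectiveness.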

IoC Feeds for Threat Hunting

IoCs represent one of the most established forms of threat intelligence automation, yet many organizations struggle to extract meaningful value from IoC feeds. The challenge lies not in obtaining indicators, but in curating high-fidelity feeds that enhance detection capabilities without overwhelming analysts with false positives.

Effective IoC automation requires a strategic approach to feed selection and integration. This involves:

  • Prioritizing IoC feeds that align with the threat model and include attribution to relevant threat groups.
  • Implementing automated scoring based on source reliability, age of the indicator, and historical false positive rates.
  • Establishing decay schedules that automatically deprecate or remove stale indicators to prevent detection drift.
  • Cross-referencing IoCs against known legitimate infrastructure to reduce false positives.
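The scoring and decay steps above can be sketched as a single confidence function. The weights, 30-day half-life, and staleness threshold are illustrative assumptions, not vendor defaults:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

HALF_LIFE_DAYS = 30.0  # assumed decay half-life for indicator freshness

def ioc_score(source_reliability: float, false_positive_rate: float,
              first_seen: datetime, now: Optional[datetime] = None) -> float:
    """Score in [0, 1]: source reliability, penalized by FP history, decayed by age."""
    now = now or datetime.now(timezone.utc)
    age_days = max((now - first_seen).total_seconds() / 86400.0, 0.0)
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential age decay
    return source_reliability * (1.0 - false_positive_rate) * decay

def is_stale(score: float, threshold: float = 0.1) -> bool:
    """Indicators below the threshold are deprecated from detection content."""
    return score < threshold
```

Running this on each feed refresh implements the decay schedule: indicators age out automatically instead of accumulating as detection drift.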

Measuring a Threat Intelligence Program

Measuring the success of a threat intelligence program is often viewed as extremely challenging, but it does not have to be. When PIRs are correctly aligned, they contain specific KPIs that map to metrics the enterprise can quantify as concrete ROI. For example, as illustrated in the earlier case study, a threat intelligence team tasked with preventing account takeover attacks against a financial institution's customers can likely assign a specific cost to each account takeover and conduct a retrospective to determine what percentage of those attacks its mitigation plan prevented.

Setting these metrics, however, can be a difficult task. When done poorly, metrics can incentivize behaviors that are unaligned or even misaligned with broader business objectives, so it is imperative that organizations create and test suitable metrics.


A three-step framework for setting effective metrics is as follows:

  • Align metrics with real-world outcomes—Create metrics that closely mirror the objectives that lead to the greatest risk reduction for the organization. For example, if the organization’s threat model indicates that the largest risk to the enterprise is an account takeover attack leading to ransomware, the threat intelligence team might build metrics around preventing these attacks.
  • Include qualitative analysis—The ROI of intelligence can be large but difficult to measure. Therefore, it is crucial to include both quantitative and qualitative metrics in the evaluation of threat intelligence ROI.
  • Tie threat intelligence objectives to other cybersecurity teams—When applied effectively, threat intelligence provides the foundation for a threat-led cybersecurity program in which controls are designed and refined based on threat modeling and current intelligence on adversary TTPs. To demonstrate value, threat intelligence should be explicitly connected to the performance of other cybersecurity functions. For example, when intelligence informs an existing control, measure its effectiveness before and after the update. This not only validates the impact of intelligence but also reinforces alignment across teams such as security operations, IAM, and incident response.

Example metrics that can be used to streamline a threat intelligence program include:

  • Number of intelligence requirements fulfilled per quarter
  • Time from intelligence collection to dissemination (e.g., target: <24 hours for tactical intelligence)
  • Coverage percentage of priority threat groups relevant to the threat model
  • Intelligence source diversity index (prevents overreliance on single feeds)
  • Percentage of security incidents where threat intelligence provided actionable context within the first four hours
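Two of the example metrics above can be computed directly from alert records. This is a minimal sketch; the record fields (`collected_at`, `disseminated_at`) and group names are assumed, not prescribed:

```python
from datetime import datetime, timedelta

def dissemination_lag_hours(records) -> float:
    """Median hours from intelligence collection to dissemination."""
    lags = sorted((r["disseminated_at"] - r["collected_at"]).total_seconds() / 3600
                  for r in records)
    mid = len(lags) // 2
    return lags[mid] if len(lags) % 2 else (lags[mid - 1] + lags[mid]) / 2

def coverage_pct(tracked_groups, priority_groups) -> float:
    """Percentage of priority threat groups with active intelligence coverage."""
    return 100.0 * len(set(tracked_groups) & set(priority_groups)) / len(priority_groups)
```

Comparing `dissemination_lag_hours` against the tactical-intelligence target (e.g., under 24 hours) turns the KPI into a pass/fail check suitable for quarterly reporting.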

In the end, it is important to work with organizational stakeholders and be creative when setting the right metrics. Some particularly sophisticated organizations measure things such as the cost of buying an exposed customer account in the cybercrime ecosystem, operating on the idea that higher prices indicate they are doing a good job preventing accounts from being exposed.

Conclusion

Threats do not exist in a vacuum—adversaries pair specific TTPs with the exact exposures they can find across an enterprise’s external footprint. Threat‑led programs are effective because they convert intelligence into decisions and decisions into measurable risk reduction. That requires three ingredients in concert: a clear threat model with well‑formed PIRs, automation that scales collection and response, and governance that ties outcomes to enterprise risk.

Done poorly, threat intelligence produces noise, unprioritized feeds, and uncertain returns. Done well, it becomes an input for enterprise risk decisions, not a parallel activity. It tells security where to harden controls, product where to close high‑impact weaknesses, procurement where to impose conditions on third parties, and executives where to invest.

Endnotes

1 Nichols, J.; “Too many threats, too much data? New survey shows how to fix that,” Google Cloud Blog, 28 July 2025
2 World Economic Forum, “Why we need global rules to crack down on cybercrime,” 2 January 2023
3 Reuters, “US Charges Russian-Israeli Dual National Tied to LockBit Ransomware Group,” 20 December 2024
4 Verizon Business, “2025 Data Breach Investigations Report (DBIR),” April 2025
5 Bradbury, D.; “Santander Employee Data Breach Linked to Snowflake Attack,” SecurityWeek, 31 May 2024
6 Mauro, C.; Sayers, J.; “The Intelligence Edge: Opportunities and Challenges of Emerging Technologies for US Intelligence,” Center for Strategic and International Studies (CSIS), 13 April 2021
7 Office of the Director of National Intelligence (ODNI), Intelligence Community Directive 203: Analytic Standards, USA, 2 January 2015
8 US Department of Justice, Legal Considerations When Gathering Online Cyber Threat Intelligence and Purchasing Data from Illicit Sources, USA, 2020