

For most organizations, aligning with the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) feels like a logical choice.1 It is widely accepted, structured, and comprehensive, at least on paper. But for those managing risk in process control network (PCN) environments, that sense of structure can be deceptive. NIST, as valuable as it is, was never designed with PCNs in mind. The assumption that IT-centric controls can be applied wholesale to operational environments is not just flawed; it is dangerous.
The Illusion of Full Coverage
PCNs are at the heart of industrial operations. These networks are composed of industrial control systems that monitor and manage physical infrastructure, including plants, power grids, and pipelines. However, PCN environments are nothing like typical IT environments.2 PCNs are built around uptime and physical safety, not flexibility or user access. They rely on legacy equipment, proprietary vendor protocols, and devices that often cannot be patched without risking disruption. Even small changes can carry operational consequences. So when IT frameworks such as NIST are proposed for these environments, decision makers often overlook just how nuanced the environments are. What works for an office network does not always work for a turbine.
Working in risk analysis across both enterprise IT and PCN domains has made one thing very clear: a single framework rarely fits both. NIST SP 800-53 and the CSF provide excellent guidance for corporate IT systems, where patching, configuration control, and access logging are manageable within standard administrative limits.
However, in PCN environments, where safety, uptime, and real-time responsiveness are critical, applying those same controls often proves impossible or counterproductive. For example, enforcing strict patching cycles or endpoint protection on a production server might introduce latency, trigger downtime, or even void vendor warranties. There have even been cases where a control meant to improve security increased operational risk, such as mandatory timeouts that disrupted continuous monitoring screens or caused operator delays. In PCN environments, good intentions on paper can quickly backfire if real-world system usage is overlooked.
When Policy Meets Reality
During a PCN risk assessment, an organization attempted to implement a standard 15-minute user inactivity timeout, a common control across IT systems.
On paper, the policy seemed straightforward. But in practice, it introduced operational risk. An engineer configuring a programmable logic controller (PLC) stepped away briefly during a vendor call, and the session timed out. Unsaved work was lost, and the system had to be reloaded from backups, delaying maintenance and creating frustration on both the engineering and compliance sides. The policy was ultimately revised to include a local exception process for key operational roles, a reminder that "common" controls often are not common sense in a PCN.
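A role-aware timeout policy of this kind can be expressed in a few lines. The sketch below is a minimal, hypothetical illustration of such a revised control: the role names, timeout values, and compensating safeguards are invented for the example, not drawn from the assessment described above.

```python
# Minimal sketch of a role-aware inactivity timeout. Role names and limits
# are hypothetical; a real deployment would load them from a governed
# policy store rather than hard-code them.

DEFAULT_TIMEOUT_MINUTES = 15  # the standard corporate IT control

# Roles granted a documented local exception, each paired with the
# compensating safeguard recorded in the risk register.
TIMEOUT_EXCEPTIONS = {
    "plc_engineer": {
        "timeout_minutes": 120,
        "safeguard": "badge-locked engineering workstation",
    },
    "control_room_operator": {
        "timeout_minutes": None,  # no auto-logout on live monitoring screens
        "safeguard": "physically controlled room; presence logged per shift",
    },
}

def effective_timeout(role: str) -> int | None:
    """Return the inactivity timeout for a role, or None for no auto-logout."""
    exception = TIMEOUT_EXCEPTIONS.get(role)
    if exception is not None:
        return exception["timeout_minutes"]
    return DEFAULT_TIMEOUT_MINUTES
```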
In another case, a corporate logging standard required all activity logs to be streamed in real time to a central security information and event management (SIEM) platform. While this control works well in IT environments, implementing it in a legacy PCN system slowed operations. Once the control was in place, engineers noticed delays in response times, and a backlog in log transmission briefly masked a critical system alert. The logging architecture had to be redesigned with local buffering and scheduled uploads, balancing visibility with performance.
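One way to implement that kind of redesign is a forwarder that accepts log records without ever blocking the control system and ships them in batches on a timer. The sketch below is a simplified illustration of the pattern, not the architecture from the case above; the upload callable, interval, and buffer size are assumptions made for the example.

```python
import queue
import threading
import time

class BufferedLogForwarder:
    """Buffer log records locally and ship them on a schedule, so a slow
    or congested link to the SIEM never blocks the control system."""

    def __init__(self, ship_batch, interval_seconds=300, max_buffer=10_000):
        self._buffer = queue.Queue(maxsize=max_buffer)
        self._ship_batch = ship_batch      # callable that uploads a list of records
        self._interval = interval_seconds  # scheduled upload period
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def log(self, record: str) -> None:
        try:
            self._buffer.put_nowait(record)  # never block the caller
        except queue.Full:
            pass  # a real system would spill to local disk, not drop records

    def _flush_loop(self) -> None:
        while True:
            time.sleep(self._interval)
            batch = []
            while not self._buffer.empty():
                batch.append(self._buffer.get_nowait())
            if batch:
                self._ship_batch(batch)      # upload happens off the hot path
```

The design choice that matters is the decoupling: the process-facing `log()` call is constant-time and non-blocking, while the network-facing upload runs on its own schedule and can tolerate delays.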
Neither case was a technical misstep; both were cultural mismatches between compliance expectations and how operational technology (OT) environments actually function. They demonstrate how even well-intentioned policies can unintentionally introduce risk when they are not adapted to the systems they are meant to protect.
Where NIST Falls Short
NIST assumes systems are administratively flexible. However, most PCN systems are not. These environments often rely on:
- Legacy hardware where modern authentication protocols are unsupported
- Vendor-locked interfaces that limit configuration
- Systems where even brief downtime is unacceptable due to safety concerns
Trying to apply controls such as NIST AC-12 (session termination) or AU-2 (audit logging) to legacy PCN assets can be technically infeasible.3 Worse, treating those failures as non-compliance without operational context misrepresents the organization's actual security posture. Misrepresenting OT limitations as non-compliance does not merely lead to ineffectual audits; it can push organizations toward even greater risk, because they end up prioritizing controls that might not even be feasible, especially in safety-critical systems. In high-risk situations, such as shutdowns or manual overrides, that kind of misunderstanding can slow down operators, break workflows, and even create safety risk.
Moreover, misrepresentation can result in resources being funneled into fixing things that are not truly broken, while real risks, such as remote access gaps or poor change tracking, are overlooked. Over time, this inefficiency erodes trust between security and operations. Employees will ultimately stop seeing compliance as something that protects them and start seeing it as a checkbox exercise that frequently gets in the way. The danger is not just technical; it is cultural, and in OT, culture is what truly upholds safety.
Compliance Is Not Security, Especially in OT
Compliance reports that check every box under NIST may still leave operational risk unaddressed. Several key areas fall outside the framework's scope, including:
- Maintenance mode risk—where protections are bypassed during plant shutdowns.
- Jump host access for vendors—which often lacks proper multifactor or session recording in practice.
- Air-gapped asset visibility—tools designed for always-on networks cannot consistently monitor assets with intermittent connectivity.
Some organizations fill these gaps by adopting the International Society of Automation (ISA)/International Electrotechnical Commission (IEC) 62443 series, while others build custom controls such as tightly governed jump boxes, offline logbooks, or dual-approver access workflows.4
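As a concrete illustration of the last of these, a dual-approver workflow needs very little machinery to enforce. The sketch below is a hypothetical, minimal version: the role names and asset identifier are invented, and a production implementation would add authentication, audit logging, and session recording.

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    """A vendor access request that stays blocked until two distinct
    approvers, neither of them the requester, have signed off."""
    requester: str
    target_asset: str
    approvals: set = field(default_factory=set)
    required_approvals: int = 2

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise ValueError("Requesters may not approve their own access.")
        self.approvals.add(approver)

    @property
    def granted(self) -> bool:
        return len(self.approvals) >= self.required_approvals

# Usage: the jump box session unlocks only after two different people approve.
request = AccessRequest(requester="vendor_tech", target_asset="PLC-07")
request.approve("shift_supervisor")
assert not request.granted
request.approve("ot_security_lead")
assert request.granted
```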
PCN environments require security programs that account for their physical and operational constraints, not just digital threats.
To be clear, this does not mean that organizations should abandon frameworks such as NIST. It means adapting these frameworks to suit the organization’s needs. PCN controls must be interpreted through the lens of industrial safety, uptime, and process continuity, not just policy enforcement. The result is a system that can withstand operating pressures: fewer process disruptions, more predictable performance, and reduced conflict between security mandates and production needs. The practical outcome is controls that serve the mission instead of distracting from it. Not perfect, but closer to reality.
Practical Recommendations
The gap between policy and practice in industrial settings is wide and rarely discussed outside of glossy framework documents. These recommendations aim to shrink that gap. They are not theory; they reflect what helps teams keep systems running safely and securely without adding layers of impractical effort.
- Conduct dual-gap assessments—Compare NIST control compliance with implementation feasibility in PCN systems.
- Map controls to PCN-aware standards—Use ISA/IEC 62443 or custom risk-based frameworks that better reflect operational realities.
- Document justified exceptions—Maintain a formal risk register for unimplemented controls, with rationale and compensating safeguards (see the sketch after this list).
- Bridge communication gaps—Educate auditors and compliance professionals on OT system limitations so compliance is grounded in reality, not just regulation.
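For the exception register in particular, even a lightweight structured format is enough to make exceptions auditable. The entry below is a hypothetical example: AC-12 is a real NIST SP 800-53 control identifier, but every other field value is invented for illustration.

```python
# One hypothetical risk-register entry for a justified, documented exception.
exception_entry = {
    "control": "AC-12",  # NIST SP 800-53: session termination
    "asset": "Legacy operator workstation, Unit 3",
    "status": "not implemented",
    "rationale": "Automatic logout removes operator visibility of live process alarms.",
    "compensating_safeguards": [
        "Workstation located in a badge-controlled control room",
        "Operator presence required and recorded in the shift log",
    ],
    "risk_owner": "OT operations manager",
    "review_cycle": "annual",
}
```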
Closing Thoughts
Security and compliance must support the mission, not override it. In PCN environments, where safety and process reliability are the mission, organizations need frameworks that acknowledge the real-world limits of technology, not just its theoretical potential. NIST remains a valuable tool, but it cannot be the only map organizations consult.
Endnotes
1 National Institute of Standards and Technology (NIST), Cybersecurity Framework (CSF), USA
2 Malviya, N.; “Process Control Network (PCN) Evolution,” Infosec, 27 August 2019
3 This technical infeasibility stems from several factors, including automatic session logouts, which can cause the operator to lose visibility or control. Furthermore, re-authentication takes time and introduces latency in responding to alarms or interlocks. In high-risk zones, that delay could translate to equipment failure or even safety risk; NIST, SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations, September 2020
4 International Society of Automation (ISA)/International Electrotechnical Commission (IEC), ISA/IEC 62443 Series of Standards
Rashmi Tallapragada, CISA, CRISC
Is an OT risk analyst at Chevron with a background in both IT and operational technology. Tallapragada holds CISA, CRISC, and CEH certifications, and her work focuses on practical risk analysis, control execution, and automation. She is especially passionate about making compliance more meaningful for control performers and helping teams build trust across silos. Tallapragada was recently honored with the 2025 Cybersecurity Woman of the Year award, and actively mentors women in the field through programs she helped start.