Since I first began building internet firewalls in the late 1980s, I have periodically encountered claims that “the perimeter is dead” or “firewalls don’t work.” These claims are rather obviously wrong: a firewall or perimeter is simply a way of separating things so you can organize them better. An internet firewall is an organizing principle between “stuff that’s not your problem” (the internet) and “stuff that’s your problem” (your network).
At a finer level of detail, you might apply other organizing principles such as “my data center,” “the unmanaged cloud of desktops” or “our PCI cloud.” If you think of firewalls or perimeters as a way of organizing the various entities you deal with, you’ll be able to better understand your strategic objectives for where data moves, how it moves, and where it sits. Without that type of organization, the idea of a network that is “yours” is purely imaginary.
If you think about firewalls and perimeters as an organizing principle, you’ll be able to see how single servers can be a “cloud of one” whether they’re on premises or off, and you can think about the trust relationships between remote servers and internal services. It’s a valuable mental tool, in other words.
We (or rather management) can also make mistakes by forgetting that there is a persistent management cost for design. Organizing your computers and thinking about where data moves and how it is stored is expensive. It takes understanding and thought to design this stuff, and if it’s not done right, you wind up with a mess. A typical mess might be: “everything can talk to everything,” which is certainly easy to set up, requires no ongoing management, and is – for all intents and purposes – impossible to secure. It seems to me that a lot of executives expect tremendous cost savings from moving to the cloud, but they don’t realize that you still need good systems people (to manage the cloud systems using the cloud providers’ interfaces) and governance/analysis (to think about where your data is moving and why). In other words, the thinking is the hard part.
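The contrast between the two designs can be made concrete. In a zone-based design, the legal flows are an explicit, reviewable table; in the “everything can talk to everything” design, there is nothing to review. A minimal sketch, where the zone names and rules are illustrative assumptions rather than a real policy:

```python
# A segmented design: allowed flows are an explicit, auditable allow-list.
# Zone names and rules here are illustrative assumptions, not a real policy.
ALLOWED_FLOWS = {
    ("desktops", "dmz"),        # users may reach public-facing services
    ("dmz", "data_center"),     # the app tier may query internal services
    ("data_center", "pci"),     # only internal services touch PCI systems
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default deny: a flow is legal only if someone wrote it down."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# The unmanaged alternative -- nothing to audit, so nothing to secure:
def flat_network(src_zone: str, dst_zone: str) -> bool:
    return True
```

The work is not in the code; it is in deciding what belongs in that table – which is exactly the thinking that doesn’t go away when the systems move to the cloud.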
Beyond security, it’s important to think about performance and reliability. If you figure out where your most important servers and data are, you can optimize your network architecture to guarantee best performance where it needs to be. Otherwise, in an “everything can talk to everything” network, your only option for performance tuning is to make everything faster. That’s an important distinction to keep in mind as we collectively move to software-defined networks. The organizing principle that leads to securing your data is also the organizing principle that allows you to optimize your data paths.
A senior IT person at a large enterprise told me, “We have web services all over the place. We use a vulnerability scanner to identify systems that are offering up data on port 80, then we track them down and analyze them.” Think about that for a second! If the organization has a purely reactive governance model like this, how will that enterprise move to a high-performance software-defined network? To map out your performance requirements, you need to know where the data is going to flow. You cannot do that if you’re perpetually reverse-engineering your design using what I call “forensic network architecture.”
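That reactive discovery process can be sketched in a few lines; the address range, port, and timeout below are illustrative assumptions, not details from the conversation:

```python
import socket

def offers_service(host: str, port: int = 80, timeout: float = 0.5) -> bool:
    """Return True if the host accepts TCP connections on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(prefix: str = "10.0.0.") -> list:
    """Scan a hypothetical /24 for hosts answering on port 80.

    Every hit still has to be tracked down and analyzed by hand:
    reverse-engineering the design after the fact.
    """
    found = []
    for n in range(1, 255):
        host = f"{prefix}{n}"
        if offers_service(host):
            found.append(host)
    return found
```

The sketch is trivial; the point is what it implies. If this sweep is how you learn what is on your network, the design exists only as an artifact to be excavated, not as a plan you can build a software-defined network around.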
When we talk about disaster recovery or data backups, the same reasoning applies: you can’t back up your data if you don’t know where it is (organizing principle: data perimeter), and you can’t make your critical systems recoverable and reliable if you don’t know which systems are critical (organizing principle: data center perimeter). None of this is a new problem, but, unfortunately, a lot of organizations are going to keep kicking the can down the road so they can preserve their hard-won ignorance about what’s going on inside their perimeter.
Editor’s note: For more of Marcus Ranum’s insights on this topic, download The Vaguely Defined Perimeter.