Given the volume of media coverage, it has been impossible to miss the recent Facebook hack that impacted the accounts of 50 million Facebook users. Whether you’re a cybersecurity or assurance practitioner reading the details in the trade press, a Facebook user seeing the notifications from Facebook, or just someone who reads the news headlines, coverage is everywhere, and it’s taking the connected world by storm.
Anytime an event of this magnitude occurs, there’s always fallout as people work to figure out what happened and why, and share advice about how to recover. But once the immediate “Sturm und Drang” has passed, there’s a temptation to return to the status quo. In this case, I think that’s a mistake – specifically because there are a few valuable lessons that we as practitioners can incorporate into how we conduct security for our own organizations. By understanding what happened with the Facebook situation, we can better position ourselves to ensure that similar things don’t happen to us. Likewise, remembering what transpired can help insulate us in the event something similar happens down the line.
With that in mind, I think there are a few important takeaways that we should pay attention to. And while I’m sure there are dozens of potential lessons above and beyond the ones I’ve cherry-picked here, I’ve tried to focus on items that are universally applicable to any shop regardless of size, industry vertical, or other constraints.
Lesson 1: Application Authentication/Authorization State Maintenance
If you’re familiar with what occurred (I won’t recount it again here, but the details are worth reading if you haven’t already), Facebook’s “view as” functionality was undermined in such a way as to allow an attacker to obtain an access token. This token, an encoded string that allows the application to recognize the user, is needed since most modern applications are designed around the principle of representational state transfer (REST). REST or “RESTful” architecture means that individual subcomponents of the application are stateless – so important pieces of state information (like, for example, “who is this user, have they authenticated, and are they allowed to do what they’re asking”) are sent along with each request to allow each piece of the app to “reconstruct” state, including important authorization and authentication decisions. This architecture has a number of advantages – scalability, performance, reliability, etc. – but a few disadvantages as well. One key disadvantage? Anybody who can steal a token can undermine the authentication and/or authorization model.
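To make that stateless model concrete, here’s a minimal sketch in Python – emphatically not Facebook’s actual scheme, and with a hypothetical signing key – of how a signed token lets a stateless service reconstruct “who is this user” from the request alone. Notice that nothing in `verify_token` distinguishes the legitimate user from anyone else holding the token.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical server-side signing key, known to every subcomponent of the app.
SECRET_KEY = b"server-side-signing-key"

def issue_token(user_id: str, role: str) -> str:
    """Encode the user's identity claims and sign them; no server-side session is kept."""
    claims = base64.urlsafe_b64encode(json.dumps({"sub": user_id, "role": role}).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET_KEY, claims, hashlib.sha256).digest())
    return (claims + b"." + sig).decode()

def verify_token(token: str) -> dict:
    """Reconstruct authentication/authorization state from the token itself.
    Whoever presents a valid token IS the user -- which is why theft is fatal."""
    claims_b64, sig_b64 = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SECRET_KEY, claims_b64, hashlib.sha256).digest())
    if not hmac.compare_digest(sig_b64, expected):
        raise ValueError("invalid or tampered token")
    return json.loads(base64.urlsafe_b64decode(claims_b64))
```

A real deployment would use a vetted standard (for example, JWTs with expiry and audience checks) rather than hand-rolled signing; the point here is only that the state lives in the token, not on the server.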
The upshot of this is that for any application – but especially for large, interconnected, and inter-dependent ones – maintenance of state is of utmost importance to security. Any situation where a token (be it an OAuth bearer token or something as simple as a session-maintenance cookie) can be stolen or guessed can impact the security of the app. This, as you’d guess, is a common problem: there’s a reason this class of issue has appeared on every version of the OWASP Top 10 published to date.
How can you address this? One useful strategy is to specifically and systematically evaluate state maintenance mechanisms as part of your application testing or pre-deployment vetting procedures. If, for example, you do application threat modeling on applications during design, look specifically at state. If you do a pre-deployment pen test or scan, make sure state is specifically included.
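As one illustration, a cheap pre-deployment check is to verify that your session identifiers come from a cryptographic RNG and are long enough to resist guessing. This is a hypothetical sketch: `new_session_id` stands in for whatever your application actually uses to mint sessions.

```python
import secrets

def new_session_id() -> str:
    # Stand-in for the application's session-ID generator under test:
    # 32 bytes from a CSPRNG, URL-safe base64 encoded.
    return secrets.token_urlsafe(32)

def check_session_ids(sample: int = 1000) -> None:
    """Fail fast if session IDs collide or are too short to resist guessing."""
    ids = {new_session_id() for _ in range(sample)}
    assert len(ids) == sample, "collisions in a small sample suggest weak randomness"
    # 32 random bytes encode to at least 43 URL-safe base64 characters.
    assert all(len(i) >= 43 for i in ids), "IDs this short are brute-forceable"
```

Checks like this won’t catch a design flaw like the “view as” issue, but they make sure the basics of state maintenance are covered before release.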
Lesson 2: Application Security Generally
Moving beyond the specifics of RESTful application state, there’s also the broader question of application security. We all know that applications change frequently nowadays – DevOps and other approaches have accelerated release schedules (in some cases to the point of multiple code pushes happening over a period of seconds or minutes). As releases become more automated, code becomes more complex and inter-dependent, and release cycles accelerate, application security becomes even more important than it already was. And yet, security programs very often invest only minimally in this area. Tools like application threat modeling are used only infrequently, and dynamic/static application testing is done on only a small subset of released applications.
There are a number of reasons why this is the case. Application security, for example, requires a somewhat different skill set than other specializations within security. That said, making sure that you have controls built specifically to help find and address application issues is a good idea – and it’s only likely to become more critical as our lives and businesses become ever more software- and application-dependent.
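As a toy illustration of how cheap basic static testing can be, the snippet below walks a Python file’s AST and flags direct calls to `eval` and `exec`. Real tools (Bandit and commercial SAST products, for instance) go far deeper; the point is simply that even minimal checks are easy to automate into a release pipeline.

```python
import ast

# Illustrative rule set; real static analyzers ship with far larger ones.
RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list:
    """Return (line_number, function_name) pairs for direct calls to risky builtins."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings
```

Wired into a CI job, a check like this costs seconds per build – a small down payment on the application-security investment argued for above.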
Lesson 3: Single Points of (Identity) Failure
The last item that I’ll mention here is the lesson about single points of failure. One of the biggest impacts of the Facebook breach is the number of external services that rely on Facebook as an authentication mechanism and identity repository. There are thousands of other applications out there (many of which have absolutely nothing to do with Facebook) that, for the sake of convenience to the user (and the developer), have elected to implicitly “trust” the identity-related information coming from Facebook. While there’s no evidence that this was directly exploited during the events of this week, the fact that it could happen has given some people pause.
Now, I’m not saying that we should all go back to the “bad old days” where every application kept (and made the user remember) its own separate identity information – after all, integration and standardization are good. However, it is useful to think about what the impact would be to you and your business if something catastrophic happened to a single point of failure like the one here. And, by “think about it,” I mean plan for it. You might, for example, consider your own “trust but verify” approach, where you validate something about the user (e.g. their device fingerprint) – or, depending on the application, you might even consider a second factor. Either way, specifically looking at this during application design is prudent.
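That “trust but verify” idea can be sketched as follows. All the names here are hypothetical, and a real device fingerprint would combine many more client signals than the two shown – the shape of the decision is what matters: accept the external identity provider’s assertion only when it’s corroborated by something you’ve verified yourself, and step up to a second factor otherwise.

```python
import hashlib

# Hypothetical store of device fingerprints previously registered per user.
KNOWN_DEVICES = {}

def fingerprint(user_agent: str, accept_language: str) -> str:
    # Toy fingerprint built from two request headers; real ones use many signals.
    return hashlib.sha256(f"{user_agent}|{accept_language}".encode()).hexdigest()

def register_device(user: str, fp: str) -> None:
    KNOWN_DEVICES.setdefault(user, set()).add(fp)

def login_decision(idp_user: str, fp: str) -> str:
    """Corroborate the identity provider's assertion with a known device;
    if we've never seen this device, don't rely on the IdP alone."""
    if fp in KNOWN_DEVICES.get(idp_user, set()):
        return "allow"
    return "require_second_factor"
```

The design point is that a compromise of the upstream identity provider no longer yields silent account takeover: the attacker still faces your second factor on any device you haven’t seen before.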
As I noted above, I’m sure there are dozens of additional lessons that one might derive from the Facebook hack. But, taking the time to evaluate what occurred – and think about how what we’re doing might be susceptible to the same issues – is always a worthwhile way to improve.