I recently discovered a fascinating C-suite report that used an apt metaphor to capture why culture is so challenging for businesses: Organizational culture is like an iceberg. That was Deloitte’s take, and it resonates with me. The relatively small portion you see above the waves represents isolated, highly visible problems—like the employee who opens the door to an attacker by clicking on a link in a phishing email. But the bulk of the culture iceberg is submerged: the shared, but often hidden, beliefs and assumptions that ultimately allow those major security problems to occur.
That’s why creating a healthy cybersecurity culture is such a high priority—and also such a significant challenge. Employees are on the front line of a company’s cyber defense, and their involvement is critical not only in preventing compromise but also in helping the organization respond quickly to the few inevitably successful attacks. For this reason, I consider a security-aware workforce to be one of the three essential elements of a cyber-resilient organization, along with mature cybersecurity capabilities and security-focused technology operations.
The challenge is that building a cyber-resilient organization means instilling a security-aware culture that encompasses all employees—executives, managers and line workers, as well as IT and security experts. And changing the beliefs and assumptions of an entire workforce is not easy.
Yet meeting that challenge can deliver business benefits that extend far beyond a reduction in cyber-incidents, according to a landmark CMMI and ISACA study of the cybersecurity culture at more than 4,800 organizations worldwide. Yes, two thirds of organizations that successfully implemented a cybersecurity culture with substantial employee buy-in said they reduced cyber incidents as a result. That’s a huge benefit in itself.
But more than half of those companies also built strong customer trust and improved their brand reputation, and a substantial number increased profitability and speed to market. In fact, 87 percent of all surveyed organizations believe that strengthening their cybersecurity culture would increase profitability or viability. The financial implications are perhaps not so surprising, since other studies have found that more than half of corporate data breaches result in significant costs, sometimes including lost revenue, not to mention the long-term impact of a tarnished reputation.
Editor’s note: This is an excerpt; the full blog post is available online. For information on the CMMI Cybermaturity Platform, visit the CMMI website.
Everyone doing business today shares an unfortunate truth: no matter how strong your cybersecurity program, your employees are your biggest potential source of failure.
It’s not that you’ve hired bad people; there simply isn’t enough understanding of the issues that matter for keeping the company safe. At a minimum, this leads to increased vulnerability to social engineering and phishing attacks, which can open the door to a far greater incursion.
When it comes to cybersecurity, though, businesses are faced with a classic conundrum: How much money and how many resources should be spent on something that hasn’t happened – and may never happen? It’s easy to blame your employees for being susceptible to spear phishing attempts, but if they weren’t given proper training to spot them, then the fault lies elsewhere.
And that’s just the tip of the iceberg. According to a recent ISACA/CMMI survey on cybersecurity culture, more than 70 percent of companies have specific policies in place for password management, automated device updates and device security, as well as for employee training and communication workflows. However, only 40 percent of respondents say that their organizations’ efforts to create a culture of cybersecurity with substantial employee buy-in have been more than moderately successful.
Interestingly, while 66 percent of respondents said their organization experienced a reduction in cyber incidents, several of the most common benefits were customer-facing: increased customer trust, better brand reputation, increased profitability and strong customer engagement. It appears that while employees may not care about cybersecurity, customers certainly do.
At my former company, Evernote, we suffered a security breach that affected 50 million users. The breach was contained quickly thanks to the training and procedures we had in place. More importantly, the damage to the brand was minimal because of the communication we maintained with customers throughout the investigation. Interestingly, what we learned was that our customers were more annoyed by the heightened security measures we applied to their accounts – now enabled by default.
The most common support request at that time was for us to allow people to use their old passwords again – because people didn’t want to have to come up with a new one for each site they log into. (Rather than grant that request, we created training on the benefits of unique passwords.)
How, then, can you ensure you have a cyber culture that sticks? Here are three key components:
1. Find a “driving why”
There’s no surer way to demotivate someone than telling them corporate wants them to do it. Likewise, employees are not usually swayed by talk of how much money the company will potentially lose, especially if it means they have to spend an extra 20 minutes every day on a new security process.
Instead, find a way to motivate employees to complete the process – for example, by providing subsidized telecommunications plans for employees who install auto-provisioning software on their personal mobile phones rather than using a guest network (or none at all).
2. Train, then train some more
The cybersecurity threat landscape is changing rapidly. Every month there are new issues to tackle that didn’t even exist before.
Whether your company is established or just starting out, frequent communication and hands-on training is crucial to maintaining a safe and secure environment.
3. Lead from the top down
No matter how much training you provide, and what incentives you provide your team, if they don’t see leadership following the process, then everything will fall apart. In order to have a strong culture, you need strong leadership to model it.
With those points in mind, the cybersecurity culture of your organization can only grow stronger.
Editor's note: Heather Wilde will participate in a panel discussion on cybersecurity culture this week at ISACA's CSX North America conference.
Given the volume of media coverage, there has been no missing the recent Facebook hack that impacted the accounts of 50 million Facebook users. Whether you’re a cybersecurity or assurance practitioner reading the details in the trade press, a Facebook user seeing the notifications from Facebook, or just someone who reads the news headlines, coverage is everywhere, and it’s taking the connected world by storm.
Anytime an event of this magnitude occurs, there’s always fallout as people seek to understand the problem, figure out what happened and why, and share advice about how to recover. But once the immediate “Sturm und Drang” has passed, there’s a temptation to return to the status quo. In this case, I think that’s a mistake. I say this because there are a few fantastic lessons that we as practitioners can incorporate into how we conduct security for our own organizations. By understanding what happened in the Facebook situation, we can better position ourselves to ensure that similar things don’t happen to us. Likewise, remembering what transpired can help us insulate ourselves in the event something similar happens down the line.
With that in mind, I think there are a few important takeaways that we should pay attention to. And while I’m sure that there are dozens of potential lessons above and beyond the ones that I’ve cherry picked here, I’ve tried to focus on items that are universally applicable to any shop regardless of size, industry vertical, or other constraints.
Lesson 1: Application Authentication/Authorization State Maintenance
If you’re familiar with what occurred (I won’t recount it again here, but the details are worth reading if you haven’t already), Facebook’s “view as” functionality was undermined in such a way as to allow an attacker to obtain an access token. This token, an encoded string that allows the application to recognize the user, is needed because most modern applications are designed around the principle of representational state transfer (REST). REST or “RESTful” architecture means that individual subcomponents of the application are stateless – so important pieces of state information (for example, “who is this user, have they authenticated, and are they allowed to do what they’re asking?”) are sent along with each request to allow each piece of the app to “reconstruct” state, including important authorization and authentication decisions. This architecture has a number of advantages – scalability, performance, reliability, etc. – but a few disadvantages as well. One key disadvantage? Anybody who can steal a token can undermine the authentication and/or authorization model.
The upshot is that for any application – but especially for large, interconnected, and inter-dependent ones – maintenance of state is of utmost importance to security. Any situation where a token (whether an OAuth bearer token or something as simple as a session maintenance cookie) can be stolen or guessed can impact the security of the app. This, as you’d guess, is a common problem: there’s a reason it has appeared on every version of the OWASP Top 10 published to date.
How can you address this? One useful strategy is to specifically and systematically evaluate state maintenance mechanisms as part of your application testing or pre-deployment vetting procedures. If, for example, you do application threat modeling on applications during design, look specifically at state. If you do a pre-deployment pen test or scan, make sure state is specifically included.
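To make the state-maintenance idea concrete, here is a minimal sketch – illustrative only, and not Facebook’s actual mechanism; the signing secret, claim names, and TTL are all assumptions – of how a service can issue and verify an HMAC-signed, expiring access token so that a tampered or expired token is rejected on every request:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me-regularly"  # hypothetical server-side signing key


def issue_token(user_id: str, ttl: int = 900) -> str:
    """Issue a signed token carrying the user id and an expiry timestamp."""
    claims = {"sub": user_id, "exp": int(time.time()) + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"


def verify_token(token: str):
    """Return the user id if signature and expiry check out, else None."""
    try:
        payload, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return None  # expired token
    return claims["sub"]
```

A state-focused pen test would probe exactly these paths: can a forged signature pass verification, and does an expired token still grant access?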
Lesson 2: Application Security Generally
Moving beyond the specifics of RESTful application state, there’s also the broader question of application security. We all know that applications change frequently nowadays – DevOps and other approaches have accelerated release schedules (in some cases to the point of multiple code pushes happening over a period of seconds or minutes). As releases become more automated, code becomes more complex and inter-dependent, release cycles accelerate, and application security becomes even more important than it already was. Yet security programs very often invest only minimally in this area. Tools like application threat modeling are used only infrequently, and dynamic/static application testing is done on only a small subset of released applications.
There are a number of reasons why this is the case. Application security, for example, requires a somewhat different skillset than other specializations within security. That said, making sure that you have controls built specifically to help find and address application issues is a good idea, and it is only likely to become more critical as our lives and businesses grow ever more software- and application-dependent over time.
Lesson 3: Single Points of (Identity) Failure
The last item that I’ll mention here is the lesson about single points of failure. One of the biggest impacts of the Facebook breach is the number of external services that rely on Facebook as an authentication mechanism and identity repository. There are thousands of other applications out there (many of which have absolutely nothing to do with Facebook) that, for the sake of convenience to the user (and the developer), have elected to “trust” implicitly the identity-related information coming from Facebook. While there’s no evidence that this has been directly exploited during the events of this week, the fact that it could happen has given some people pause.
Now, I’m not saying that we should all go back to the “bad old days” when every application kept (and made the user remember) its own separate identity information – after all, integration and standardization are good. However, it is useful to think about the impact to you and your business if something catastrophic happens to a single point of failure like the one involved here. And by “think about it,” I mean plan for it. You might, for example, consider your own “trust but verify” approach, where you validate something about the user (e.g., their device fingerprint) or, depending on the application, even require a second factor. Either way, specifically looking at this during application design is prudent.
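One common way to implement such a second factor is a time-based one-time password (TOTP, standardized in RFC 6238). As a hedged sketch of how a verifier might compute the code using only the Python standard library – the key shown is the RFC’s published test secret, not a production value:

```python
import hashlib
import hmac
import struct


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    return hotp(key, int(for_time) // step, digits)


# RFC 6238 test vector: at t=59 the 8-digit SHA-1 code is 94287082
print(totp(b"12345678901234567890", 59, digits=8))
```

The verifier simply recomputes the code for the current time window (and usually one window on either side, to tolerate clock drift) and compares; no per-user server state beyond the shared secret is required.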
As I noted above, I’m sure there are dozens of additional lessons that one might derive from the Facebook hack. But, taking the time to evaluate what occurred – and think about how what we’re doing might be susceptible to the same issues – is always a worthwhile way to improve.
The Dark Web is the part of the internet that is inaccessible by conventional search engines and requires special anonymizing software to access.
In colloquial terms, these are the darkest corners of the internet, where a wide range of nefarious activity takes place, as highlighted in the graphic below.
The Dark Web raises many questions, even among security professionals. Here are some answers to some of the questions that surface most frequently:
How can I check to see if my information has been stolen?
You can check to see if your email address has been compromised by using https://haveibeenpwned.com. If your information is present there, it is likely available on the Dark Web as well.
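The same service also offers a Pwned Passwords “range” API that lets you check a password without ever transmitting it: you send only the first five characters of its SHA-1 hash and do the final match locally (a k-anonymity scheme). A minimal Python sketch, with error handling omitted:

```python
import hashlib
import urllib.request


def sha1_split(password: str):
    """Return the 5-char prefix and 35-char suffix of the SHA-1 hex digest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Ask the range API how many breaches contain this password (0 = none)."""
    prefix, suffix = sha1_split(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response lists hash suffixes and counts, one "SUFFIX:COUNT" per line
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

Because only a five-character hash prefix leaves your machine, the service never learns which password you were checking.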
What are some examples of Dark Web, or The Onion Router (TOR), sites?
The Dark Web features marketplaces, forums, search engines, paste sites, social media sites, and chat rooms.
What actors use the Dark Web?
Six categories of threat actors exist on the Dark Web:
- Nation-states that utilize Advanced Persistent Threat (APT) tactics use the Dark Web for reconnaissance and espionage purposes.
- Cybercriminals often use marketplaces in order to achieve monetary benefit.
- Hacktivists attempt to establish a social or political cause across all different types of platforms.
- Terrorists seek to spread propaganda and recruit new members.
- Insiders are motivated by a variety of factors, but oftentimes leak sensitive data onto the Dark Web for reprisal against their employer or for financial gain.
- Lastly, there are curious threat intelligence analysts who want to learn more from the Dark Web, assist in bug bounty programs, or enhance their technical skillsets.
What are some case studies of Dark Web sites?
Various data is stolen and sold on the Dark Web. Below are just a few examples:
- Financial information: Credit and debit cards are sold across many forums and marketplaces. Stolen cards come from data breaches in every country, and oftentimes they are sold for as little as $1. Tax data, including W-2 forms, is also commonly sold on the Dark Web.
- Personal information: Everything from names, addresses, Social Security Numbers (SSNs), and dates of birth to an associated Starbucks account is sold on the Dark Web. When this information is compiled and sold as a single package, the data dumps are called “fullz” because they contain all of a person’s identifiable information.
- Health records: Although health records are harder to find, they are becoming more available by the day. This is a growing concern and a vulnerability for the future.
- Miscellaneous: Drugs are everywhere on the Dark Web – you can purchase virtually any prohibited item imaginable. Moreover, you can purchase or simply download information that can be damaging to an individual – such as stolen information from the extramarital dating website Ashley Madison. You can also purchase a hacker or exploit to carry out an attack against an organization of your choosing. The possibilities are limitless.
Anything else you would like to add about the Dark Web?
I want to note that the underground criminal community has expanded to encompass anything you can imagine – goods, hitmen, even “hacker clothes.” Most of the websites have an Amazon-type feel to them, in which buyers provide seller feedback and note the authenticity of the stolen goods/services/information. The majority of transactions are handled in cryptocurrency (usually bitcoin), mail forwards, and electronic gift cards. I don’t encourage anyone to do their Christmas shopping here, though.
About the author: Wanda Archy is a cyber threat intelligence specialist focused on Dark Web investigations. Currently, Wanda is a Supervisor in RSM's Security, Privacy, and Risk services. She received her Master's degree in Security Studies and Bachelor's degree in Science, Technology, and International Affairs from Georgetown University. Wanda has her CISSP, CEH, and Security+ certifications, and speaks Russian.
When we talk about risk management, we are often fixated on protecting data confidentiality and mitigating related risks, but there are other equally compelling concerns, such as data availability. Consider the case of the NotPetya malware, which last year attacked the shipping giant Maersk among other companies.
For Maersk, the attack meant millions of dollars in losses, delayed shipments, and endless hours of manual paperwork as this global company rebuilt every laptop and server. It is a cautionary tale about data availability risk, continuity of operations and disaster recovery.
The Wired article on this subject reveals the details of how an international, multibillion dollar company was hit by NotPetya: a lethal cocktail featuring a penetration tool called EternalBlue, combined with Mimikatz, a tool that allows hackers to harvest passwords in the RAM of a Windows machine.
By the time Maersk’s security and IT professionals realized what was going on, it was too late—their data was wiped. NotPetya, a malware named for its similarity to the ransomware Petya, was particularly harmful because it didn’t ask for a ransom and no keys were presented for data recovery. Created to disrupt on a global scale, NotPetya left its victims—and the global, interconnected community—facing the harsh new reality of cyberwarfare.
How is this a story about data availability? The glaring flaw in the risk management strategy (where it existed) of NotPetya’s victims, like Maersk, was in their backup and disaster recovery plans. (Unfortunately, at TalaTek, we have seen poor backup and recovery plans more times than not.)
According to published reports of the incident, Maersk had a few hundred domain controllers across the globe, all of which were wiped out in the first few seconds of the attack. The company’s backup plans had not prepared for this scenario. While the IT department thought there was enough redundancy to protect the company from the impact of outage or failure of any given domain controller, no secure encrypted backups existed to recover a lost domain controller.
From what we can determine, Maersk didn’t plan for the likelihood of a zero day attack that would wipe all domain controllers at once. Without domain controllers to manage access and permissions to the various servers and data structures, there was no way to recover the data.
Fortunately, Maersk had one small bit of luck. An unplanned power outage had kept a single server from getting infected, preserving a lone domain controller in an office in Ghana. Because the Ghana office didn’t have sufficient bandwidth to sync with the data center over the internet, a relay race was set in motion in which personnel from Ghana frantically met personnel from London in Nigeria – a perfect Ethan Hunt mission.
The scary reality of the new cyberwarfare landscape is that we are highly susceptible to this risk and cannot defend our digital systems fast enough. We are faced with the reality of being only as secure as our weakest systems. Governments, hospitals, airports, water treatment plants and food manufacturers and distributors — you name it— are all at risk.
A single machine at Maersk started the global meltdown that resulted in an estimated $10 billion in losses when all was said and done. Describing the NotPetya attack, the Wired article observes, “It crippled multinational companies including Maersk, pharmaceutical giant Merck, FedEx’s European subsidiary TNT Express, French construction company Saint-Gobain, food producer Mondelēz, and manufacturer Reckitt Benckiser. In each case, it inflicted nine-figure costs. It even spread back to Russia, striking the state oil company Rosneft.”
What to do? Here are my top 10 tips. Unfortunately, there is nothing glamorous about the work that needs to be done, including:
- Have good, old-fashioned backups of everything.
- Test your backups frequently to make sure they work.
- Use separate keys for encrypting backups. If you lose your primary data and the backup is encrypted with a different key, chances are higher that you will be able to recover your data.
- Design tabletop exercises to mirror disaster and recovery situations that are detailed and realistic, and practice corrective actions.
- Train everyone in your tabletop tree on their roles and responsibilities when a disaster strikes.
- Have redundant service providers and redundant locations where data is stored.
- Back up your domain controllers.
- Document detailed and thorough restoration plans to rebuild every server.
- If you are using cloud solutions, leverage the power of the cloud to design effective solutions for recovery.
- Always understand your risks and remember that the landscape has changed dramatically. Be sure to revisit your cybersecurity plans for disaster recovery every six months.
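The second tip – actually testing your backups – is easy to automate. The sketch below is a minimal illustration (a real plan would add encryption with a separately stored key, off-site copies, and scheduling): it archives a directory, restores it to a scratch location, and compares SHA-256 digests to prove the backup is genuinely restorable:

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path


def file_digests(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def backup(src: Path, archive: Path) -> None:
    """Write a gzipped tar archive of the source directory."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=".")


def verify_restore(archive: Path, expected: dict) -> bool:
    """Restore into a scratch directory and check every file's digest."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(scratch)
        return file_digests(Path(scratch)) == expected
```

Running a check like this on a schedule turns “we have backups” into “we have backups that we know restore correctly,” which is the difference that mattered at Maersk.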
If you are like any of the security leaders with whom I typically speak, you face (at least) the following burning problems:
- A security compromise cannot happen on my watch!
- If I invest resources in a particular security approach – whether it be people, products, process, or a combination of all three – how do I know that it will pay off to actually deliver on my goals?
Does this sound like you? If so, I’m here to help!
Extrapolating key learnings from more than 14 years of security research, including hacking everything from cars to medical devices to Internet of Things devices, my upcoming session at next month's CSX North America conference will transform you into a more empowered sentinel, capable of implementing a robust and capable security mission. To deliver this transformation, I will outline a three-phase action plan, complete with concrete, actionable steps to help you accomplish your mission.
Phase I: GRASP
Most leaders think security is about activity, when actually it is about integrity. Many organizations approach security by just beginning to take action, ticking off the many tangible milestones that need to be addressed in order to arrive at a secure posture. While many of these activities are indeed important, before appropriate action can be taken, an organization must first understand what it needs to accomplish and why. Far too often, organizations want to jump right into the doing without first performing the planning. As the Cheshire Cat so wisely stated in Alice in Wonderland: “If you don’t know where you are going, any road will take you there.” The purpose of the GRASP phase of the action plan is to define exactly where you are going, why that direction is important, and how you will approach pursuing it. During my address at CSX North America, I’ll elaborate on this concept, exploring the key facets of this action plan, including:
- Define Your Goal
- Understand the Business Context
- Implement Threat Model
Phase II: ASSESS
Most leaders think security is about process, when actually it is about dedication. Many organizations stumble into what I refer to as “the compliance trap,” wherein the organization seeks to outline a prescribed list of controls and then certify how compliant they are with this framework. However, such checklist-oriented security models are inherently flawed because they do not account for the nuances and other characteristics unique to that organization; thus, even a “compliant” system will have gaps in its security posture. Instead, organizations should focus not on process-based compliance but on dedication. This requires an organization to truly understand the reality of how their system might be attacked, identify exploitable vulnerabilities, and determine how to remedy those flaws. During my address, we will examine the key actions in this phase, including:
- Break Security Features
- Chain Vulnerabilities
- Strategize Mitigations
Phase III: ADAPT
Most leaders think that security is about achieving a “clean bill of health,” when actually it is about education. Organizations commonly have a desire to obtain a record that states their system to be free from security flaws, which they can then use for marketing and sales enablement purposes. However, this thinking assumes security to be static, when in fact security is dynamic. Attackers evolve, attack methods are innovated, market conditions change, and technology iterates. All of these evolutions fundamentally change the threat model and attack landscape, requiring an organization to adapt accordingly. To be effective, organizations need to be constantly educating themselves, learning, and evolving. During my address, we will explore the core facets of this phase, including:
- Reassess System
- Study Attack Evolution
- Update Security Models
Author’s note: If you believe that it is important for you to acquire tangible guidance that will enable you to make a meaningful impact on your security mission, then I hope you’ll join my session, “Flatlines For Show, Exploits ‘Oh’ No!,” at CSX North America on 15 October. My purpose is to empower others to make such an impact. I’ll be telling stories, showing attack demo videos, and equipping you to be successful!
Do you struggle to keep up to date on the latest cybersecurity terminology?
Fear not, you are not alone.
Behavioral microtargeting, cryptojacking, fileless malware, malvertising, cloudlets, unified endpoint management and sextortion are just some of the terms cropping up with increased regularity over the past two years.
“Hey Raef, BA was just subject to a digital skimming cyberattack. Can you write a piece on that?”
I could have taken a reasonable guess at what that term means, but guesswork combined with writing for magazines is a fast way to lose credibility. Added to that, I have been maintaining a publication called The Cybersecurity to English Dictionary for a few years now. That has meant that my spidey senses tingle each time someone drops in a new term.
- Is it something I will need to add to a future edition?
- Was it just a term made up by an eager marketing department?
- Does it reflect an emerging cybercrime trend or defensive technology?
A few years ago, maintaining the dictionary was a joyful skip in the park. Rarely did a new term worth defining emerge – and most of the expansion between editions was down to just extending the existing vocabulary it covered. Now, there are new terms thrown around on at least a weekly basis.
The problem is threefold:
- Cyber criminals are rapidly developing new threat tactics in an attempt to send their industry over the trillion-dollar threshold.
- New vulnerabilities and exploits are requiring new defensive technologies and processes. As an example – consider how Spectre and Meltdown drew many of us into looking more deeply at potential processor security gaps.
- The budgets being assigned to cybersecurity are attracting a lot of marketing spend. Is that new term just marketing spin or does it have real value?
Together, this trinity of issues has meant that staying apprised of the language of cybersecurity has not only become tougher – but is continuing to get harder because the evolution seems to be accelerating.
How do you keep up to date?
For me, one of the best sources of real information comes from attending ISACA conferences. It is a good way to find other professionals in similar roles and compare notes on the reality of each of our environments. Those presentations also are a great way to pick up on exactly what real-world security functions are doing.
Spending a few thousand corporate dollars on attending a conference can often yield substantial returns on investment for your organization. It is a place where you can get insights into the best practices that are really working – unlike sales presentations where information is often mixed with a substantial degree of marketing spin.
Security conferences, real world consulting and news stories are my own primary sources for understanding the evolving language of cybersecurity.
Despite that, there is still a challenge. Although the principles behind cybersecurity have largely remained the same, the methods for achieving effective security are changing fast.
Perhaps one indicator is that in the most recent update to my dictionary, I found that I had more than 100 new terms – roughly a 30% increase over the previous edition.
For example, where we once talked about anti-malware and anti-virus, discussions have now moved on to unified endpoint management.
Cybersecurity can be like learning a new language, and it is not just the information security professionals who find keeping up to date with the topic a challenge. Now that data breaches are a frequent topic for the C-suite, executives have a regular need to understand complex cybersecurity topics in plain and simple language.
The good news is that there are some great FREE resources out there to help decipher the terminology. One of those is the ISACA glossary; another is the somewhat shorter UK NCSC (National Cyber Security Centre) glossary.
In the meantime, for me, it’s time to start collating and demystifying the new terms for the 5th edition of my dictionary – and with the speed of evolution in the cybersecurity market, that might be something I have to do sooner than I would like.
Editor’s note: The Cybersecurity to English Dictionary, 4th Edition is available beginning 24 September 2018.
Stakes are increasing when it comes to leveraging technology to define and deliver new value. The CEO and the executive team leaders are reeling with the challenges of identifying and implementing new digital business models while also wrestling with making smart capital investments to develop and mature organizational capabilities that enable agility and rapid response to new market opportunities. At the same time, board directors are in a quandary, attempting to make sense of the digital landscape, and to obtain assurance that their CEO and executive team leaders are enabling the right culture, acquiring and nurturing the right talent, validating that the technology investments are prudent and reasonable, and effectively capitalizing on business opportunities while mitigating security concerns that pose significant risks to the company’s financial position and reputation.
Many refer to this point in time as the era of “digital disruption” and “digital transformation.” For me, these phrases seem somewhat of a misnomer. Taking a more macro and holistic look at this period, and reflecting on history as a means to understand where we are and where we are headed, perhaps what we’re really witnessing is a revival of classic laissez-faire economics. Market forces are being reshaped by technology in ways never previously imaginable. The pace of technology-driven innovation is far exceeding the ability of government and regulatory entities to put corresponding consumer protections in place, even as organizations struggle to recalibrate their information and technology governance and security to adjust to business opportunities appearing and vanishing in much shorter cycles. What’s really at stake today is the longer-term survivability of enterprises as we know them, coupled with the coming of inconceivable shifts in jobs and how people will work. And we find ourselves merely at the tip of the digital economy iceberg.
Dr. Peter Weill, director of MIT’s Center for Information Systems Research in Cambridge, Mass., says that, “in a digital economy, the whole company is responsible for generating value from digital investments.” To address this challenge, his research identified three key components on which enterprises must focus. First, there is the strategic, which is envisioning how the company will operate in the future. Second, there is oversight, which is making sure major investments and organizational changes are on track. Third, and of critical importance, is the defensive, which is effectively meeting the challenges of security, privacy, and compliance on an ongoing basis.
Key to meeting the aforementioned challenges? People, of course. No wonder that in Gartner’s recently released list of barriers to becoming a successful digital business, talent emerges as among the most significant. Not surprisingly, many organizations still follow the same hiring protocols they did 10 years ago. While arguably some criteria for new hires haven’t changed, such as having a strong work ethic, a knack for problem-solving, good time management skills, and a thirst for continuous learning, there needs to be increased focus on recruiting those who demonstrate that they are digitally savvy or grasp the need to prioritize growing their skills in this area. This means understanding how new and emerging technologies can be deployed, how to harness big data and statistical analysis to shape new approaches to product development and deployment, and applied knowledge of technologies that are or will be shaping the future of business, including the likes of cloud computing, AI and machine learning, blockchain, augmented reality, and perhaps even the promise of quantum computing. These attributes, along with a propensity to be comfortable with risk and uncertainty, should above all enable hiring managers to see whether candidates exhibit the right chemistry to fit into the corporate culture. Simply stated, traditional organizational hiring practices must be modernized to cultivate the right talent in order to successfully meet the challenges of the digital economy.
So, let’s not be fooled into thinking we’re okay because our company ship has yet to hit that digital economy iceberg. This iceberg runs long and spikes just beneath the surface. Navigating around it calls for “all hands on deck.” Traversing these choppy seas without incident means establishing and maturing the capabilities our organizations will need to turn on a dime when things matter most. The only way the CEO and executive teams can become confident is if the right talent is in place. Similarly, the only way for boards to obtain assurance that the corporate ships are in good hands is to be convinced that the CEO and executive teams have established the right culture with the right people, and that they are effectively addressing the strategic, oversight and defensive components necessary to generate value from digital investments. As Peter Weill notes, “How good are you at each of these will predict your likely success in the digital economy.” I could not agree more. We find ourselves in exciting times, perhaps just as exciting as the days of those who paved the way for laissez faire economics back in the 18th century.
Editor’s note: This article originally published in CSO.
As digital business hastens the speed of application development and gives way to complex, interconnected software systems (think Internet of Things, microservices and APIs), we need to confront the fact that penetration testing, although thorough, is slow and expensive. On average, it takes eight months to identify and understand the cyber and regulatory risks associated with any new software, according to research from security company Sonatype.
Software development trends are compounding the issue in that software is being built and released faster (see the “Agile Manifesto”), but the tools and people resources to address security risk are not keeping pace.
Trends such as DevOps that require security teams to deliver deep integration and the automation of security tooling drove us, in conjunction with Centre for Secure Information Technologies at Queen’s University Belfast, to ask the question, “What is the path to self-securing software?”
Penetration testers and tools can only scan the parts of a website they can observe; many aspects may fall outside the testing scope. What is really interesting, however, is that the code contains everything the website can do (functionality, data, etc.).
We were interested to discover if there was a way to scan code to automatically understand what it is. For example, is it a website or a desktop application? Does it allow the user to enter financial info or personal details? If it does, where is that info stored? This information can be used to drive other testing tools or penetration testing by informing them of what the code is, the associated functionality, data types, etc. In essence, this information can automatically inform the scope and focus of security testing.
We looked at source code parsing technology and how, by using it, we could determine what a web application actually is and does. Antlr stood out as a popular tool in this area, and it allowed us to build a tool that scanned website source code and provided us with a digital understanding of the website. We could then use that data to drive automated security tools.
The result? We were able to automatically understand the attack surface of a website by scanning the code. We could then use that intelligence to further drive manual, commercial or other open source testing, facilitating continuous, automated security testing of developing code. Since the orchestration and execution of security testing was automated, it could easily be wrapped into development teams’ daily (or weekly) processes, flagging security issues long before the system was deployed externally.
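To make the idea concrete, here is a minimal sketch of code-driven attack surface discovery. It is not the Antlr-based Uleska tool; it uses Python’s standard-library `ast` parser as a stand-in, and the sample source, function names, and the “routes are strings starting with /” heuristic are all invented for illustration:

```python
import ast

# Hypothetical web-app source to analyze; invented for this sketch.
SOURCE = '''
def login(request):
    user = request.form["username"]
    password = request.form["password"]
    return check("/api/auth", user, password)

def profile(request):
    return render("/api/profile")
'''

def attack_surface(source: str) -> dict:
    """Walk the parse tree and extract a rough attack surface:
    entry-point function names plus route-like string literals."""
    tree = ast.parse(source)
    entry_points = [node.name for node in ast.walk(tree)
                    if isinstance(node, ast.FunctionDef)]
    routes = [node.value for node in ast.walk(tree)
              if isinstance(node, ast.Constant)
              and isinstance(node.value, str)
              and node.value.startswith("/")]
    return {"entry_points": entry_points, "routes": routes}

print(attack_surface(SOURCE))
```

A real implementation would use a grammar-aware parser such as Antlr to handle multiple languages and would classify what the code does (financial data entry, storage locations, etc.), but the principle is the same: the parse tree already describes everything the application can do, so it can feed scope and targets to downstream security testing tools.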
We believe that the tool we created (and have further developed at Uleska) is addressing the “pressing need to orchestrate tools and automate testing in a continuous delivery pipeline and facilitate AST at scale, as well as improve context and prioritization for remediation efforts” that Gartner has identified for so-called ASTO (Application Security Testing Orchestration) tools that are coming onto the market.
After decades of presentations and prayers, security has finally become a business imperative for executives and boards alike. Business leaders are speaking publicly about championing security investments, as it’s important for shareholder value and future expectations. In fact, evidence-based security effectiveness measures are finding their way into annual reports (10-Ks), committee charters, and corporate governance documents.
Because of the spotlight that is on security, your business leaders are demanding security effectiveness evidence from you. This evidence is similar to the data-driven measurements and KPIs seen in other strategic business units such as shareholder return, client assets, financial performance, client satisfaction, and loss-absorbing resources.
Your leaders are making decisions predicated on these non-security measures every day to increase value for their shareholders, address stakeholder requirements, and mitigate business risks. Security is simply another variable in the business risk equation. In fact, your security program isn’t about security risk in and of itself, but rather, the financial, brand, and operational risk from security incidents.
One area where the need for security effectiveness evidence is abundantly obvious is rationalization. For example, many auditors no longer ask, “Do you have security tools in place to mitigate risk?” because the answer is always, “Yes, but we need more tools, training, and people anyhow.” Now auditors are asking for rationalization in terms of, “Can you prove, with quantitative measures, that our security tools are adding value? And can you supply proof regarding the necessity for future security investment?”
This evidence-based, rationalization methodology, often characterized as security instrumentation, aligns with the reality that your organization has finite resources to invest in security and that all investments need to be prioritized. Every dollar invested in security is a dollar not applied to other imperatives.
Measuring your security effectiveness: where you’ve been
The sad truth is that most security effectiveness measures are assumption-based instead of evidence-based. Because of a lack of ongoing security instrumentation, you assume your tools and configurations are doing what is needed and that your incident response capabilities are a well-choreographed integration of people, processes, and technologies. You know that assumption-based security is flawed. But historically, you haven’t had a way to empirically measure security effectiveness. You get some value from penetration testing, the endless march of scan-patch-scan, surveys, and return on security investment calculations, but these approaches don’t truly measure your security effectiveness. As a result, your business leaders are relying on incomplete and/or inaccurate data to make their decisions.
Where you need to be
You need to know if your security tools are working as intended. Once they are, you can optimize those tools to get the most value, rationalize, and prioritize where greater investment is required, and retire tools no longer needed. Then you can monitor for environmental drift so that when a tool is no longer working as needed, you are alerted to the drift and how to fix it. Finally, from a leadership perspective, your team can consider security effectiveness measures when calculating the business risks.
How to get there
You get there by safely testing your actual, production security tools with security instrumentation solutions: not scanning for vulnerabilities, not looking for unpatched systems, and not launching exploits on target assets, but actually testing the efficacy of the security tools protecting your assets. This lets you measure the security effectiveness of individual tools as well as your security effectiveness overall. When gaps are discovered, you can use prescriptive instrumentation recommendations to address those gaps. Then you can apply configuration assurance to retest the security tools and validate that the prescriptive changes implemented produced the desired outcome. Once your security tools are in a known good state, automated testing can continue validation in perpetuity, alerting you when there is environmental drift.
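The validate-then-monitor loop described above can be sketched in a few lines. This is a deliberately simplified illustration, not a real instrumentation product: the control names and pass/fail checks are invented, and real platforms exercise actual tools rather than lambdas:

```python
def run_tests(controls: dict) -> dict:
    """Stand-in for instrumented tests; returns pass/fail per control."""
    return {name: check() for name, check in controls.items()}

def detect_drift(baseline: dict, current: dict) -> list:
    """Return the controls whose result changed since the known-good state."""
    return [name for name in baseline if current.get(name) != baseline[name]]

# Hypothetical controls, captured in a known good state after
# configuration assurance has validated each one.
controls = {
    "firewall_egress_block": lambda: True,
    "ids_alert_on_beacon": lambda: True,
}
baseline = run_tests(controls)

# Later, a configuration change silently breaks the IDS rule (simulated).
controls["ids_alert_on_beacon"] = lambda: False

# Continuous re-testing surfaces the environmental drift.
drifted = detect_drift(baseline, run_tests(controls))
print(drifted)  # the controls that now need prescriptive remediation
```

The point of the sketch is the shape of the process: establish an evidence-based baseline, re-test continuously, and alert on any divergence rather than assuming the tools still work.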
The end result of security instrumentation is security effectiveness that can be measured, managed, improved, and communicated in an automated way. Your security teams are armed with evidence-based data that can be used to instrument security tools, prioritize future investments, and retire redundant tools. This newfound ability to communicate security effectiveness and trends based on actual proof allows your decision-makers to incorporate security effectiveness measures when making business decisions.
Author’s note: Brian Contos is the CISO & VP Technology Innovation at Verodin. He is a seasoned executive with over two decades of experience in the security industry, board advisor, entrepreneur and author. After getting his start in security with the Defense Information Systems Agency (DISA) and later Bell Labs, he began the process of building security startups and taking multiple companies through successful IPOs and acquisitions including: Riptech, ArcSight, Imperva, McAfee and Solera Networks. Brian has worked in over 50 countries across six continents. He has authored several security books, his latest with the former Deputy Director of the NSA, spoken at leading security events globally, and frequently appears in the news. He was recently featured in a cyberwar documentary alongside General Michael Hayden (former Director NSA and CIA).