I recently attended a security conference with multiple speakers covering a wide variety of topics. One of those topics, “Zero-Trust Architecture” (ZTA), was being addressed by one of the vendors, and I decided to sit in and listen. A few minutes into the session, two facts became glaringly apparent: 1) the speaker, who shall remain nameless, did not actually understand what Zero-Trust Architecture is or what it means to implement Zero Trust, and 2) this was a sales pitch disguised as an educational seminar.
Unfortunately, presentations on this and other topics are often heavy on buzzwords that don’t actually contribute value or advance understanding. As the aforementioned session came to a close, it transitioned into the Q&A portion – which happened to be the moment I lost still more hope for our fellow cybersecurity aficionados after hearing some of the questions asked. Below are just a few of them:
After walking out of the session and regaining consciousness, I decided to take a little time out of my day to bring awareness to Zero-Trust Architecture and demystify what it means. First and foremost, ZTA is NOT a new technology. As illustrated by Palo Alto’s Cyberpedia article, achieving Zero Trust is often perceived as costly and complex. However, Zero Trust is built upon your existing architecture and does not require you to rip and replace existing technology. There are no Zero Trust products. There are products that work well in Zero Trust environments and those that don't.
Zero Trust is the term for an evolving set of network security paradigms that move network defenses from wide network perimeters to narrowly focusing on individual or small groups of resources. A ZTA strategy is one in which there is no implicit trust granted to systems based on their physical or network location (i.e., local area networks vs. the internet). In layman’s terms, the basic principles of zero-trust are:
- Assume the network is always hostile
- External AND internal threats are always present
- Network locality alone is never sufficient for deciding trust
- Every device, user, and network flow MUST be proven
- You must log and inspect ALL traffic
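The principles above amount to a default-deny access decision made per request. As a minimal sketch (field names here are illustrative, not from any particular product), a zero-trust policy check might look like this:

```python
# Minimal sketch of a zero-trust access decision: every request is evaluated
# on its own merits (user identity, device posture, and flow), with no
# implicit trust granted to "internal" sources. All field names are
# illustrative assumptions, not a real product's API.

def authorize(request: dict) -> bool:
    """Deny by default; grant only when every factor is explicitly proven."""
    checks = (
        request.get("user_authenticated", False),   # user identity verified
        request.get("device_attested", False),      # device health proven
        request.get("flow_encrypted", False),       # traffic encrypted end-to-end
    )
    return all(checks)

# A request from the internal network with an unattested device is still denied:
internal_request = {"user_authenticated": True, "device_attested": False,
                    "flow_encrypted": True, "source": "10.0.0.5"}

# Only a fully proven request is allowed, regardless of where it came from:
proven_request = {"user_authenticated": True, "device_attested": True,
                  "flow_encrypted": True}
```

Note that the source address plays no role in the decision – that is the paradigm shift in a nutshell.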
These security principles are a stark contrast to what most organizations currently implement, which is perimeter-based security, which adopts the following basic security principles:
- Internal access is trusted
- External access is untrusted
The major shortcomings of perimeter-based security are that:
- Inside access is not always friendly
- Modern attacks are inside-out, rather than outside-in
- Trusted systems bring attackers in
- Internal access is more loosely regulated
Most organizations go a step further and implement logical segmentation, such as separating different organizational components into their own subnets, implementing a demilitarized zone (DMZ), a Web Application Firewall (WAF) and more. However, this approach is starting to show its age, as the foundation of perimeter-based security primarily follows “trust, but verify,” which is fundamentally different from ZTA’s paradigm shift of “verify, and then trust.”
Another fundamental concept that pairs well with ZTA is Trust Over Time (TOT), which essentially boils down to the notion that risk to systems and assets increases over time, so trust needs to be refreshed as deviations from the baseline accumulate. To reduce operational risk over time, activities such as rotating credentials and replacing certificates limit the window for compromise and reuse.
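A simple way to operationalize Trust Over Time is to treat every credential as having an expiry and flag anything past its rotation window. The 90-day window below is an assumed policy value, not a standard:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Sketch of Trust Over Time: trust in a credential decays with age, so
# anything older than the rotation window is flagged for refresh.
MAX_CREDENTIAL_AGE = timedelta(days=90)  # assumed policy, tune to your risk appetite

def needs_rotation(issued_at: datetime, now: Optional[datetime] = None) -> bool:
    """Return True when a credential has outlived its rotation window."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at > MAX_CREDENTIAL_AGE

# A credential issued five months ago is overdue; one issued last month is not.
stale = needs_rotation(datetime(2024, 1, 1, tzinfo=timezone.utc),
                       now=datetime(2024, 6, 1, tzinfo=timezone.utc))
fresh = needs_rotation(datetime(2024, 5, 1, tzinfo=timezone.utc),
                       now=datetime(2024, 6, 1, tzinfo=timezone.utc))
```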
ZTA is essentially asking us to authenticate and encrypt all traffic – end-to-end. Everywhere and anywhere. For ZTA to be implemented properly, encryption cannot simply be perimeter-based. Encryption is required at either the device or application. Endpoints should be configured to drop anything not encrypted. This is quite a tall order and has the potential to interrupt or completely break an operational process or technical mechanism depending on the implementation and environment. Justin Henderson from the SANS Institute does a great job going into further detail in his SEC 530 webcast seminar and provides further examples of leveraging your current technology stack to implement ZTA.
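To make the “drop anything not encrypted” requirement concrete, here is a sketch of a strict TLS policy built with Python’s standard-library ssl module: legacy protocol versions are refused and clients must present a certificate (mutual TLS). The certificate file names in the comments are placeholders:

```python
import ssl

# Sketch: an endpoint configured to drop anything that is not both encrypted
# and mutually authenticated. This only builds the TLS policy object; the
# certificate paths mentioned in comments (server.crt, server.key,
# clients-ca.crt) are placeholders, not real files.
def strict_server_context() -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions
    ctx.verify_mode = ssl.CERT_REQUIRED            # client MUST present a certificate
    # In a real deployment you would also load the server's own key pair and
    # the CA used to verify client certificates:
    # ctx.load_cert_chain("server.crt", "server.key")
    # ctx.load_verify_locations("clients-ca.crt")
    return ctx

ctx = strict_server_context()
```

Applying a context like this to every internal service, not just the perimeter, is what pushes encryption down to the device or application layer.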
In summary, achieving Zero Trust does not require adoption of any new technologies. It is simply a new approach to cybersecurity – “never trust, always verify,” or eliminating any and all implicit trust – as opposed to the more common perimeter-based approach, which assumes that user identities have not been compromised and that all human actors are responsible and can be trusted. The concept of trusting anything internal to our networks is fundamentally flawed, as evidenced by the steady stream of data breaches in the news, most of them caused by misuse of privileged credentials.
Around this time each year, many people aim to follow through on their New Year’s resolutions with the hope of finally being able to break that bad habit, which can prove trickier than we would like. Unfortunately, the same often holds true in our approach to cybersecurity. Despite repetitive cybersecurity reminders, time and time again, we fall back into old habits. However, the new year seems like the perfect time to try to convince you that those bad cybersecurity habits might not be so hard to break after all. Below are several patterns to break that can make a big difference.
Using Weak Passwords
123456, iloveyou and qwerty continued to be used as passwords in 2019 and, no surprise here, they continued to show up in data breaches. Consider using a password manager to make it easier to remember those really long, complex passwords you are going to be coming up with as part of your resolution. In addition, start enabling two-factor authentication as much as possible – yes, even for that random app you decided to try “just once.” If you already do this personally, encourage your company to start implementing new policies or revamping those old policies to match updated password recommendations.
Insufficient Vigilance with Phishing Emails
Fake attachments were on the rise in 2019 due to email filters only scanning the body of an email for phishing links, while social media networks and Office 365 became larger targets for phishing emails because of the amount and value of the information contained within them. To start off 2020, promote awareness of phishing email red flags with a fun graphic or create a regular test schedule for email phishing campaigns. For your personal benefit, take a free phishing IQ test to make sure you stay on top of your game.
Accessing Free or Public Wi-Fi
We continue to use free and public Wi-Fi because, well, it’s convenient. We use it on our phones to check social media, and employees continue to use it on their laptops to access work on the go. One of these next tips might just be the easiest New Year’s Resolution you’ve ever made: turn off AirDrop and file sharing, log out of sites when you leave them, and change your device settings to not automatically connect to available Wi-Fi networks. For those that may need to access confidential information, make sure you use VPN and install updates for apps and the operating system as soon as possible.
The best thing you can do to ring in 2020 is to continue educating your company and the people around you about cybersecurity best practices. Human error continues to be the biggest weakness in cybersecurity, but you never know when a New Year’s resolution might actually stick.
We know artificial intelligence will loom large in the new decade, and we know cybersecurity will be critically important as well. How those two forces intersect sets up as one of the most fascinating – and consequential – dynamics that will shape society’s well-being in the 2020s.
According to ISACA’s new Next Decade of Tech: Envisioning the 2020s research, cybersecurity is the area in which AI has the potential to have the most positive societal impact in the new decade, with areas such as healthcare, scientific research, customer service and manufacturing also among the top responses offered by the 5,000-plus global survey respondents. If that proves to be the case, it will represent a giant step forward for security practitioners and the enterprises that they help to protect. The threat landscape has become too expansive and too sophisticated for most organizations to handle relying exclusively upon traditional approaches. There is no shortage of ways in which tapping AI can enhance enterprises’ security capabilities, and the applications are particularly promising when it comes to putting the vast security insights available from big data to good use. Leveraging these insights will prove vitally important across the spectrum of security teams’ responsibilities, allowing them to better identify threats and pinpoint anomalies that might otherwise have escaped human practitioners’ notice.
The increasing integration of AI and machine learning into cybersecurity is especially important because the well-documented cybersecurity skills gap does not appear to be abating. In ISACA’s Next Decade of Tech research, only 18 percent of respondents expect the shortage of qualified cybersecurity practitioners to be mostly or entirely filled over the next decade, and the majority anticipate the gap will either widen or stay the same. Given that it routinely takes organizations several months or longer to fill open cybersecurity roles today, and given the increasingly challenging threat landscape, this ongoing skills gap brings into focus how critical it will be for organizations to incorporate AI into their security tools and techniques. None of this is to say that we should give up on addressing the human skills gap: people analyzing AI, providing appropriate direction around AI solutions and communicating security risks to executive leadership will all be as necessary as ever in the next decade. Rather, we must explore all avenues to bring more skilled professionals into the field, including a concerted push to address the underrepresentation of women in the security workforce.
The Security Battle of the Next Decade: AI vs. AI
There is little question that AI and machine learning will increasingly be deployed by enterprise security teams in the 2020s, and it will not be long before heavy reliance on AI for security purposes becomes mainstream. What is less certain, though, is whether enterprises will become more adept at using AI than the cyber adversaries that they are attempting to thwart. Unfortunately, cybercriminals are also well aware of the impact that AI can make, and they often prove to be ahead of the curve compared to security teams, who are frequently spread thin protecting all of their organizations’ digital assets. The potential use cases for malicious AI in a security context are often dire. The ISACA survey results list attacks on critical infrastructure as the leading concern for malicious AI attacks in the next decade, with other possibilities – such as social engineering attacks, data poisoning and AI attacks targeting the healthcare sector – also creating worrisome scenarios. With AI presenting potent ways to sharpen existing attack types and opening the door to entirely new forms of attacks, an already formidable threat environment is sure to become even more perilous due to AI-driven advancements that cybercriminals will be eager to embrace.
As we transition to a new decade, there is no more meaningful question on the security landscape than who will harness AI more effectively: cybersecurity professionals or cybercriminals. Enterprises should be actively exploring how AI can present new avenues to strengthen their cybersecurity teams while also putting in place the needed governance and risk management frameworks to be sure that AI deployments are implemented responsibly. Considering how prominently AI will factor into cybersecurity in the 2020s, organizations also will need to invest substantially in training to equip practitioners with the knowledge of how AI-based security tools work and the context of how they can be best applied to the current threat landscape. In a decade of security that will boil down to AI-driven threats versus AI-bolstered security, taking these measures provides the best opportunity for security practitioners to rise to the considerable challenge. And in the medium-to-long term, look for the discussion on AI to shift to our ability to control it, as well as AI’s ability to protect or attack based on well-defined and regulated ethics.
Editor’s note: This article originally appeared in CSO.
In the classic movie “The Wizard of Oz,” protagonist Dorothy Gale leaves Kansas and enters a new world, the land of Oz. While Oz is unfamiliar and unlike anything Dorothy has encountered before, she is able to navigate fairly well because she has a roadmap – the Yellow Brick Road. CISOs are not as fortunate as Dorothy. For CISOs, the expectations may be clear (from operational oversight to organizational politics to managing talent), but a roadmap to being effective in meeting those expectations is notably absent.
Given the timeliness of the topic of CISO effectiveness, the Security Leaders’ Summit at the 2019 Infosecurity ISACA North America Expo and Conference delved into recommendations that may help CISOs navigate challenges they may experience along their career paths. In his presentation, "CISO Leadership: Navigating Cybersecurity Leadership Challenges," Todd Fitzgerald with CISO Spotlight, LLC shared tactical as well as strategic approaches that may help CISOs create a roadmap to effectiveness. Tactically, Fitzgerald recommends that CISOs:
- Focus on where data is and how to protect it
- Help the enterprise gain competitive advantage by using technology such as AI, machine learning and cybersecurity analytics.
Strategically, Fitzgerald shared that if an enterprise has the philosophy that cybersecurity is everyone’s responsibility, all departments should map their roles to cybersecurity. In return, CISOs can ask what they can do to help departments ensure cyber health for the enterprise. As CISOs partner across their enterprises to gain competitive advantage through technology, Prasant Vadlamudi, director, technology GRC, Adobe, advised CISOs to remain cognizant of stakeholders’ expectations regarding use of emerging technology, particularly when taxpayer funds are involved.
Continuing with the strategic approaches that CISOs may use to navigate a roadmap to effectiveness, in his presentation, “CISOs in the Boardroom,” Vivek Shivananda, president, CyberSecurity Solutions, Galvanize, offered the recommendation that CISOs remain mindful of the board’s concerns: business interruption, reputational damage and breach of customer information. He continued to share that two different dashboards can be useful for CISOs: an internal dashboard that is more technically focused and a second dashboard that is more focused on business impact. In looking at metrics, Shivananda recommended that CISOs acknowledge and address the challenges of identifying what metrics to focus on, deciding how to address the data needs of many stakeholders, and reconciling when data exists from multiple sources.
In looking at the challenges CISOs face as enterprises gauge the CISO’s effectiveness, data was a recurring topic covered during the summit. Recommendations for CISOs on how to address these data-related challenges included knowing where data is located in order to best protect the data, and leveraging the data as the basis of dashboards that meet internal needs as well as board expectations. Beyond data, strategic recommendations covered at the summit included positioning cybersecurity as everyone’s responsibility and remaining mindful of the board’s concerns. These recommendations are not the visible Yellow Brick Road that Dorothy Gale had to guide her journey in the Land of Oz, but they do provide a roadmap that CISOs can use to navigate a path to effectiveness.
Every year has its share of security gaffes, breaches, and hacker “shenanigans.” As we enter into the new year, it is inevitable that we will see articles in the mainstream and trade press recapping the worst of them.
There are two reasons why these lists are so prevalent. The first is human nature: fear gets attention. Just like a product vendor using FUD (fear, uncertainty, doubt) to boost sales, fear drives journalistic readership, so it’s natural that the trade media would cover this. If we’re honest about it, there’s probably also an element of schadenfreude. High-stakes roles like assurance, governance, risk, and security are hard – and stressful. There’s an element of “thank goodness it wasn’t us” that happens to practitioners when reading about a breach that happened to some company other than our own.
All this is to be expected of course, but at some level when I see the inevitable year-end “breach recaps,” I feel like we’re missing an opportunity. Why? Because focus on the outcome alone leaves out an important part of the discussion – specifically, the lessons learned that inform how we can improve.
I’ll give you an example of what I mean. Say that I told you that the death toll from the Black Plague in the 14th century was about 50 million. Horrible, right? But does that give you any information about how to prevent disease? Measures to treat or diagnose them? Information about carriers or transmission vectors? No. Sure, a statistic like that is attention-grabbing ... but other than for a very small segment of practitioners (such as those doing pathogen statistical analysis), it doesn’t foster future disease prevention. If you spell out that the reasons for the rapid spread of the disease during this time period were (simplified of course) related to hygiene/sanitation, population density, and maritime trade, well that starts to tell you something that can inform prevention.
My point is, it’s useful to look at the worst/scariest events of the year to the extent that we draw out the lessons learned and takeaways that inform future efforts. With this in mind, and as a counterpoint to lists you might see in other venues, we’ve put together a list of five security “events” from this year that we think contain useful lessons. These aren’t the biggest breaches, or the scariest, or those with the biggest financial impact. Instead, these are the events that carry important lessons it behooves practitioners to learn from.
#1 – Facebook Account Data Compromised
The first one of these relates to the discovery of 540 million user records (about 150GB worth) from Facebook found exposed on the internet via several third-party companies. The root cause? Improperly secured S3 buckets. There are three important lessons here. The first and most obvious is about securing – and validating – permissions on cloud storage buckets (this is a big one). The second is the importance of software (“application”) security: tools like application threat modeling can find and flag potential application design, configuration, or implementation issues early, which is why threat modeling remains an important tool in our security arsenal. The third is the third-party angle – specifically, liability. In this case, the collection was facilitated by third parties, not Facebook itself; and yet, who is highlighted in the headlines? Facebook. Never forget that if you have the primary customer relationship, you ultimately wind up holding the bag.
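Validating bucket permissions lends itself to automation. The sketch below evaluates a simplified, hypothetical ACL structure – deliberately not the real AWS API shape – to flag grants that make a bucket world-readable:

```python
# Hedged sketch: the incident traced back to world-readable storage buckets.
# This evaluates a simplified, HYPOTHETICAL ACL structure (not the real AWS
# API response shape) to flag grants to "everyone"-style grantees.
PUBLIC_GRANTEES = {"AllUsers", "AuthenticatedUsers"}

def publicly_readable(acl: dict) -> bool:
    """Return True if any grant exposes read access to the public."""
    return any(
        grant.get("grantee") in PUBLIC_GRANTEES
        and grant.get("permission") in {"READ", "FULL_CONTROL"}
        for grant in acl.get("grants", [])
    )

# A bucket granting READ to AllUsers is flagged; a team-scoped grant is not.
public_acl = {"grants": [{"grantee": "AllUsers", "permission": "READ"}]}
private_acl = {"grants": [{"grantee": "data-team", "permission": "READ"}]}
```

Running a check like this on a schedule, rather than once at provisioning time, is the “validating” half of the lesson.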
#2 – Facebook Plaintext Passwords
The second one I’ve chosen to highlight is also from Facebook – in this case, the storage of Instagram passwords in the clear. Note that I’m not intentionally picking on Facebook by including them twice; in fact, this one is actually a “success story” for them (at least from a certain point of view). Specifically, in this case, a “routine security review” found passwords stored in plaintext. While analyzing and remediating that, they found more instances of passwords stored in log files as well. So, what’s the lesson? The main one I’d highlight is the value of technical assurance efforts, particularly the value of validating cryptography use. These are areas often overlooked but that can have real, tangible security value. “Stuff happens” – and no company can ever do anything 100% perfectly all the time; but having a mechanism to find and fix those issues when they happen can mean the difference between minor egg on the face and major catastrophe.
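For reference, the fix for plaintext password storage is well understood: store only a salted, slow hash and compare in constant time. A minimal standard-library sketch (the iteration count is illustrative, not a tuned recommendation):

```python
import hashlib
import hmac
import os

# Sketch of the remedy for plaintext password storage: keep only a salted,
# deliberately slow hash (PBKDF2 from the standard library) and compare in
# constant time. The iteration count is illustrative, not a recommendation.
ITERATIONS = 200_000

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest); only these two values are ever stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare without leaking timing information."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret-passphrase")
```

The assurance lesson is the companion to this: a review that actively greps logs and data stores for anything resembling a raw credential is what catches the cases the design missed.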
#3 – Source Control Shenanigans
Next up, attackers downloading Git repositories, scrubbing them, and holding the source code for ransom. This event itself isn’t all that interesting from a tradecraft perspective: the vector here was run-of-the-mill account compromise (e.g., leaked or stolen passwords, API keys, etc.) What makes this interesting is the fact that it targets source control specifically. In days gone by, vetting source control platforms (both the configuration as well as whether they contain secrets like cryptographic keys or passwords) took a lot of time and occupied quite a bit of practitioner attention. As platforms become standardized and Git becomes ubiquitous, attention from practitioners can waver. Don’t let that happen. Staying vigilant about source control is still important – even in the GitHub era when everything is centralized and standardized.
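Part of that vigilance – checking whether repositories contain secrets – can be sketched as a simple pre-commit scan. The patterns below are illustrative and far from exhaustive (real scanners use much larger rule sets):

```python
import re

# Sketch of a pre-commit secret scan: flag lines that look like hard-coded
# credentials before they reach source control. These two patterns are
# illustrative only; production scanners carry far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key ID prefix
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(text: str) -> list:
    """Return the 1-based line numbers that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

sample = 'db_host = "localhost"\npassword = "hunter2"'
```

Wiring a scan like this into a commit hook or CI pipeline keeps the check running even when practitioner attention wavers.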
#4 – Malware Fully Loaded
Germany’s BSI (Bundesamt für Sicherheit in der Informationstechnik) warned about Android smartphones coming “out of the box” with malware embedded (in this case, embedded in firmware). This isn’t the first time that we’ve seen phones (or other products for that matter) ship with malware pre-installed. It’s not even the first time we’ve seen BSI warning about stuff like this. It is, however, a great example of information sharing and the value of government keeping citizens’ information secured. The BSI is specifically chartered with warning people about security issues in technology products (section 7 of the BSI Act) and investigating products (see section 7a). The lesson? There’s a role that governments can play in ensuring the security of products sold within its jurisdiction, and that role can be highly effective.
#5 – Disgruntled
Last up, a dismissed IT staff member of a transportation service company was jailed for targeting and sabotaging his former employer’s AWS systems. Long story short, after he was let go back in 2016, he used an administrative account to begin systematically sabotaging and disabling AWS assets of his former employer; as a result, the company lost a few key customer contracts. There are a few lessons here. First, once again note the earlier lesson about securing cloud assets. Beyond hammering on that again, though, this event also highlights internal threats and privileged account use. We would all do well to maintain awareness of internal threats. As technology becomes more prevalent and more business-critical, a rogue or disgruntled employee (even former employee) has the potential to do significant damage. Likewise, cloud can make privileged accounts more complicated since it can add account types we didn’t have before (in addition to root and Administrator users, we now also have cloud administrator accounts to keep track of). Continued management, monitoring and protection of these accounts is important.
The increasing reliance on big data and the interconnection of devices through the Internet of Things (IoT) has created a broader scope for hackers to exploit. Now both small and large businesses have an even wider surface to work on protecting. Yet, all it takes is one new trick for an attacker to penetrate even the most sophisticated firewalls in a matter of seconds. The good news is that while, on the one hand, increased reliance on big data puts businesses at risk of cyberattacks, if used well, the same data can be used to enhance cybersecurity.
How Big Data Is Helping Cybersecurity
We are so used to the idea of protecting data that using it to bolster cybersecurity might not be top of mind. However, it's not only sensible, but also incredibly effective. According to the results of a study conducted by Bowie University, 84% of businesses using big data successfully managed to block cyberattacks. What was their secret? Three words: big data analytics.
Big data analytics refers to the process of analyzing or assessing large, varied volumes of data that is often unexploited by regular analytics programs. The data can either be unstructured or semi-structured, and in some cases, it could be a mix of both. Initially, the aim of analyzing such data was to make data-driven decisions and determine customer preferences to improve operational efficiency and enhance client satisfaction. But now, data analytics is also being used to retrieve important information from big data, with the sole aim of strengthening cybersecurity. This is done by analyzing historical data to come up with better security threat controls.
By combining big data analytics and machine learning, businesses are now able to perform a thorough analysis of past and existing data and identify what's “normal.” Based on the results, they then use machine learning to strengthen their cybersecurity defenses so they receive alerts whenever there's a deviation from the normal sequence of things and, consequently, can thwart cybersecurity threats.
For instance, if big data analytics on past and existing data shows that all employees log in to an entity’s system at 8 in the morning and log off at 5 in the evening, the business will mark this as the standard and expected sequence of things. It can therefore come up with a way to prevent, and get alerts on, any attempted login before 8 a.m. or after 5 p.m. This, in turn, can prevent potential hacks from happening. In a nutshell, carrying out a thorough analysis of historical data helps an organization identify its network’s regular patterns, so it can come up with solutions to detect and prevent deviations in real time.
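The login-hours example reduces to a two-step pattern: learn a baseline from historical data, then flag anything outside it. A deliberately minimal sketch (real systems use far richer statistical or machine-learning models):

```python
# Minimal sketch of the baseline-and-alert idea from the login-hours example:
# learn the observed login-hour range from historical data, then flag logins
# outside it. Real deployments use richer statistical or ML models; this is
# the simplest possible version, for illustration only.

def learn_baseline(login_hours):
    """Baseline = the (earliest, latest) login hour seen in history."""
    return min(login_hours), max(login_hours)

def is_anomalous(hour, baseline):
    """A login outside the learned window is a deviation worth alerting on."""
    start, end = baseline
    return not (start <= hour <= end)

history = [8, 9, 8, 10, 16, 17, 9]   # hours of past logins (24-hour clock)
baseline = learn_baseline(history)
```

A 3 a.m. login would trip this check; a noon login would not – which is exactly the kind of real-time deviation alert the paragraph above describes.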
The Analysis of Current and Historical Data for Threat Visualization
By analyzing big data, businesses can foresee future attacks and come up with effective measures to prevent them. For instance, if a company is already a victim, carrying out a thorough analysis of the data of the events leading to the attack helps it identify the patterns followed by the hackers before they gained successful entry into the network. They can then use machine learning to formulate a solution that will ensure such a thing doesn't happen again.
Alternatively, if a business has never been attacked, it can use current and historical industry data to identify strategies used by hackers to attack other entities. Based on what it comes up with, it can then visualize what steps similar attackers would take to penetrate its system, and consequently, come up with a solution before they do.
While it’s true cyber-criminals do target big data while formulating their attacks, organizations can use the same data against them through data analytics and machine learning.
The dark web ecosystem continues to evolve as a place where cybercriminals can sell and access stolen data, purchase black-market items such as guns, drugs and hacking software, and connect with like-minded individuals. As is the case in any supply-and-demand scenario, since there remains a strong demand for these and other items, the dark web will remain a popular hub for the foreseeable future. That, in turn, puts security professionals and their enterprises in the position of needing to gain a deeper understanding of the dark web and how to mitigate its various risks.
In many cases, organizations have a long way to go in this regard. Even the name “dark web” connotes a taboo that, unfortunately, causes many organizations to shy away from giving this space the attention that it deserves. While there are areas of the dark web that need to be dealt with cautiously, the dark web’s basic contents, pathways and major risks should be well-understood by organizations’ security teams. Pursuing knowledge about cyber threats and cyber adversaries provides a baseline foundation for any successful cybersecurity program, so dismissing the dark web as either too dangerous, too far out of the mainstream or too complicated to merit attention does a disservice to the organizations that security professionals are responsible for protecting.
While there is a diverse range of threats organizations can face through the dark web, some relatively common ones include:
- The sale of customer data
- The sale of personal data (including medical/prescription data)
- Identity theft
- Credit card fraud
- Gift card fraud
As ISACA states in a briefing on the dark web, “All of these crimes can jeopardize an enterprise’s customers, partners and vendors; require significant investment to repair; and erode its reputation in the marketplace. Sadly, because of the anonymity and privacy of the darknet, most enterprises will not know when attacks are coming, what kinds of attacks they are likely to incur, where the attacks will likely originate nor who will be behind them.” Consider that for a moment: if enterprise security teams are unaware of these fundamental details, there is no chance that they can realistically thwart these attacks or be well-positioned to limit the resulting damage. It is difficult enough for security professionals to contend with the challenging threat landscape when they are actively monitoring and assessing threats; without that level of due diligence, security teams are inviting disaster.
It probably is not necessary for all members of security teams to be experts on the dark web, but it would be advantageous to have at least one team member be highly knowledgeable, and for other members of the team to have enough familiarity to be able to deal with specific incidents that demand attention. Pen testers, who can benefit from gaining knowledge of new attack methods, and incident responders, who stand to benefit from insights related to their investigations, might find it especially beneficial to become attuned to certain forums and activities on the dark web. If it is not realistic for smaller teams to have dark web-savvy practitioners on staff, then engaging third-party expertise can provide a viable alternative.
While the dark web accounts for a relatively small percentage of all content on the internet, it is a vast enough space that organizations are unable to actively monitor all, or even most, of the material on the dark web. However, by prioritizing high-impact risks, there is much to be gained in pinpointing key areas of the dark web to regularly monitor. Exactly what those areas are will vary from organization to organization, depending on the nature of its business and customer profile, but some likely starting points are applicable dark web forums (where discussions take place highlighting vulnerabilities and attack methods) and black markets (a commerce-focused area where stolen data can be browsed and purchased). It is important to bear in mind, however, that the dark web is no place for security professionals in the private sector to engage with criminals. That is the territory of police and other law enforcement agencies, as it would be dangerous to ignore that cyber criminals are people who also act in the physical world.
The old saying that ignorance is bliss might apply in some cases, but that approach is counterproductive when it comes to dealing with nefarious activity on the dark web. The reality is there is a high volume of activity on the dark web, including many activities, transactions and schemes that could have a direct impact on enterprises and their customers. It is understandable that security teams already feel like they are spread thin with their business as usual responsibilities, and the concept of proactively taking on a new frontier such as the dark web might seem like an intimidating course of action. However, operating as if what transpires on the dark web is outside of a security team’s scope is a failure to provide the due diligence that boards of directors and organizational leaders expect from their security teams. The dark web is an important source of knowledge for security professionals in order to understand both the threats and attack practices of cyber adversaries.
Editor's note: This article originally appeared in CSO.
Deborah Juhnke, senior consultant with Information Governance Group LLC, cited a definition of information governance as “an organization’s coordinated, interdisciplinary approach to satisfying information compliance requirements and managing information risks while optimizing information.”
Accomplishing all of that can be a tall order, even overwhelmingly so, acknowledged Juhnke in her session, “Information Governance – The Foundation of Information Security,” that took place today at the Infosecurity-ISACA North America Expo and Conference in New York City.
Considering the size of the challenge, Juhnke encouraged practitioners to identify some “low-hanging fruit” to start with, while remaining mindful of the bigger picture.
“While on the one hand being overwhelmed with everything is bad, at the same time, you kind of need to think of the whole picture to then drill down on a particular issue,” Juhnke said. “That’s the challenge is seeing the big picture but not being overwhelmed by the big picture, and still being able to then focus down and say we have all the elements and all the stakeholders, now let’s get those stakeholders together and see what we want to tackle first.
“It might not be your problem, it might be this [person]’s problem, but let’s figure something out and go after it, and show some success.”
Juhnke asked attendees how many of them go back to read emails from 2002, making the point that managing a glut of outdated data can be unnecessary and counterproductive.
“Look carefully at your email rules, if you have any, to find out how to best contain the creation and retention of email,” she said. “Secondary to that would be to find any archives of email and be sure that you can clean those up.”
By the same token, Juhnke said fileshare services, if unchecked, can pose tricky governance challenges, so they present another opportunity to streamline governance priorities.
“Make some effort at sifting through them to figure out what’s there,” Juhnke said. “There tend to be ways of going through that where you can just take a chainsaw to some of it because it’s just so old – well, that's gone. Then you maybe need to be a little more careful at the next level and maybe get some user involvement at that point, but work your way through those fileshares to see if you can clean that out, and at the same time, give [users] a new model.”
In general, having too much information leads to a lack of efficiency and can present compliance challenges, so being judicious when it comes to classifying and storing data is critically important.
Other key takeaways from Juhnke’s presentation included:
- Unmanaged, unstructured data increases footprint for compromise.
- There are ample regulatory and legal drivers for change.
- Security standards support improved information governance.
- Engaging multiple stakeholders enhances the argument for change.
- Triage and disposition will diminish the attack surface and improve compliance.
Editor’s note: For additional coverage of the Infosecurity-ISACA North America Expo and Conference, see the latest “Off-Stage and Off-Script” episodes of the ISACA Podcast.
Though device manufacturers have worked to improve the cybersecurity of their medical devices, there is still a long way to go. Improvements aside, there are distinct steps the IT information security department can take to reduce risk and improve cybersecurity for medical devices.
Some notable improvements in cybersecurity for medical devices in recent years include:
1. US Food & Drug Administration (FDA) guidance to manufacturers that operating system updates, especially to patch vulnerabilities, do not require the manufacturer to go back through the entire FDA approval process.
In the past, medical device manufacturers often claimed that they could not update or patch the operating system due to FDA requirements. Some vendors legitimately believed this, while others may have used it as a convenient excuse to avoid the hard work of updating and testing medical device operating systems.
2. Medical device manufacturers have become better able to allow anti-virus (AV) software to run on medical devices or systems.
Many medical device systems previously could not run anti-virus software for a variety of reasons. Often, the AV software misinterpreted the actions of the medical device software and would sometimes interrupt the operation of the device. In some cases, this could cause serious patient harm. Device manufacturers have been working to ensure their systems can run with anti-virus software without impacting the functionality of the device.
3. Newer medical devices use modern operating systems and some support virtualization of certain functions.
Using modern operating systems (i.e., ones still supported by manufacturers such as Microsoft) ensures these systems can be updated and patched, especially for updates that address new and emerging operating system vulnerabilities. In addition, being able to virtualize the server functions of many medical devices allows those systems to be managed using modern tools and techniques. Both capabilities reduce cybersecurity risk.
While these developments are excellent, they do not address the millions of existing devices in healthcare environments today that were deployed prior to these changes being made. In every hospital and clinic today, there are thousands of devices running older operating systems that cannot be patched and that cannot be run with an anti-virus or anti-malware solution. Information security professionals need to address these devices using tried-and-true methods.
1. Segment medical devices on VLANs.
By putting all medical devices on restricted VLANs, you can ensure that only devices to which you specifically provide access are able to communicate across the network. While this does not protect the network from malicious control of those medical devices, it restricts the exposure of those devices.
2. Implement firewalls to secure systems that cannot be protected with other methods.
Firewalling off those devices is another important aspect of securing medical devices. When operating systems cannot be updated or patched, putting them behind firewalls with very restricted and defined access can reduce risk significantly.
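The segmentation and firewalling steps above can be sketched with a default-deny forwarding policy. This is a minimal illustration assuming a Linux host acting as the gateway for a medical-device VLAN; the subnet (10.20.30.0/24), server address (10.0.5.10) and port are hypothetical placeholders, not values from the article.

```shell
# Default-deny forwarding for a hypothetical medical-device VLAN.
# All addresses and ports below are illustrative placeholders.

# Dedicated chain for traffic originating from the device VLAN.
iptables -N MEDDEV
iptables -A FORWARD -s 10.20.30.0/24 -j MEDDEV

# Allow devices to reach only a designated monitoring server over TLS.
iptables -A MEDDEV -d 10.0.5.10 -p tcp --dport 443 -j ACCEPT

# Allow return traffic for sessions the devices have already established.
iptables -A FORWARD -d 10.20.30.0/24 -m conntrack \
  --ctstate ESTABLISHED,RELATED -j ACCEPT

# Everything else from the device VLAN is logged, then dropped.
iptables -A MEDDEV -j LOG --log-prefix "MEDDEV-DROP: " --log-level 4
iptables -A MEDDEV -j DROP
```

The key design point is that the device subnet gets no general network access; each permitted destination is an explicit, documented exception, which is what makes unpatchable legacy devices manageable.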
3. Educate your IT staff about the importance of managing these devices (network and server) securely.
Often, IT staff are responsible for managing and maintaining all servers, including those for medical devices and systems such as patient monitoring, OR systems or Cath Lab systems. Modern systems often support virtualization, so updating and maintaining these systems often falls to the IT staff. It’s critical that these staff understand the unique cybersecurity risks of these systems and are able to identify potential malicious activity quickly. Additionally, these staff need to clearly understand the operational impact of server maintenance on clinical operations to prevent potential patient harm (i.e., don’t take the OR server down while cases are underway).
4. Educate your biomedical and diagnostic imaging staff about cybersecurity, especially as it pertains to medical devices.
Your biomedical equipment technician (BMET) and diagnostic imaging staff need to be educated both on basic cybersecurity and the specific vulnerabilities of the systems they support. They don’t need to become cybersecurity experts, but they need to understand the work they do and how it impacts (and is impacted by) cybersecurity. For example, they need to understand how to securely connect devices to the network or update anti-virus software (if permissible), or to restrict access to administrator accounts on these systems. Working with your IT cybersecurity staff is a great way to cross-train both teams on this critical knowledge.
5. Educate your end users on the basics of cybersecurity and teach them potential warning signs to look for on their medical devices.
Your end users of medical devices are typically nurses, patient care techs and specialty techs (such as radiology technicians). These are the people who work with these medical devices daily, and they should be aware of the basic cybersecurity risks of the equipment as well as best practices in handling these devices. Perhaps most importantly, they should understand what potential malicious behavior would look like on those devices and have a fast, easy method for reporting potential issues so that cybersecurity experts can examine and contain any potential malicious activity.
Malware on medical devices can create patient harm, including death. It can also infect the wider network and potentially take down critical systems like the Electronic Medical Record (EMR) software or business systems such as payroll or HR. Working to expand your cybersecurity team to include BMET, DI and end user staff will help reduce medical device cyber risk.
Traditionally, an organization’s Chief Information Security Officer (CISO) and Chief Marketing Officer (CMO) haven’t had significant overlap when it comes to day-to-day roles and responsibilities. The CMO focuses efforts on brand growth and marketing strategy. The CISO, on the other hand, has been more focused on architectural efficiency, reliability and security.
Today, data is the lifeblood of business. Businesses have access to copious amounts of consumer data that can be leveraged to gain a better understanding of their market and customer base. To the CMO, this is a gold mine – more detailed insight into the wants, needs, habits and activities of their target demographics. These can result in initiatives with large scopes and larger budgets. On the flip side, the CISO sees the red flags and vulnerabilities that come along with this information. Privacy and security threats, technological limitations, and reputational risk are all on the radar. Commonly their response is to reel the scope back in to reduce risk and budget. As you may expect, this can result in internal friction as to who is truly responsible for the management of this data, making it more important than ever for the CISO and CMO to establish an effective working relationship.
In order for your organization to best capitalize on the benefits of big data, the CISO and CMO must work together cohesively. This can be a challenge initially, as the two not only have different objectives when it comes to the use of data, but also in their ability to effectively communicate and understand the other’s perspective. In an effort to establish this relationship effectively, there are critical steps that should be taken to avoid setbacks or breakdowns in communication:
Establish Common Short- and Long-Term Goals
This one may seem obvious, but it’s likely the most critical aspect of the relationship’s foundation. Each side will have objectives they are looking to meet, and those objectives likely steer in opposite directions (especially when it comes to the budget). Where the CMO will be looking for more data points and more access, the CISO will be looking for stronger protections and stricter access control. Rarely, if ever, are the two sides going to have aligned perspectives on what should be prioritized. To avoid issues and breakdowns in the relationship, establish long-term business goals and intermediary milestones to ensure that both sides are working toward a common goal.
Break Down the Communication Barrier
Anyone working within the IT realm has seen it. You start explaining the details of an issue or a project. You try to keep it simple, avoiding technical terms and acronyms as much as possible, but then you notice glazed-over eyes and nodding responses. You could be using completely made-up terminology for all they know. If others are going to be expected to understand your perspective on things, they will need to understand the language, especially when it comes to security. The same goes for those within IT trying to understand marketing jargon and methodologies. Breaking down these barriers by educating the other team(s) on the basic terminology and approaches can go a long way to increasing the effectiveness of the relationship.
In addition to simply breaking down the language barrier, having a better understanding of mindsets and concerns will result in bringing better proposals to the table. Identifying beforehand the information and reasoning that will be valuable to the discussion for outside groups will result in conversations that are more open and productive. What is a security framework? Why does working in a cloud environment present different risks and challenges? Why are these data points relevant to marketing? Things that may seem simple and obvious to you may not be so clear-cut to others.
This may mean that an intermediary party with a better understanding of both sides is needed to facilitate the conversations. Establishing common ground and ensuring that there is nothing lost in translation is an important part of creating a functional and effective relationship.
Establish a Communication Plan
As with any relationship, communication is key. Establishing a recurring sit-down or planning session together will help to ensure that any new ideas or needs are on the radar and the appropriate considerations can be given from both sides. The frequency should be determined based on the volume of work being performed together or upcoming goals and milestones that are expected to be met. If an intermediary is brought into the fold, they should be part of these sessions as well. These sessions should serve as a chance for each side to better understand the wants, needs and challenges the other is facing.
As the business world continues to shift, the lines within the traditional organizational charts will continue to blur. Establishing effective relationships between all departments and layers of an organization is critical. Taking steps to ensure that those relationships are open and reciprocal will help to generate success not only for those parties, but for the organization as a whole.