With less than 100 days to 25 May, many organizations outside the European Union have the same question: “Does the General Data Protection Regulation (GDPR) apply to my organization?”
The answer has to be “it depends” – an answer no one likes. You cannot immediately say yes or no. Instead, you need to take a step-by-step approach: identify the requirements of GDPR, determine the organization’s connection with the personal data of EU citizens, and consult an attorney specializing in GDPR as needed. The answer can only be given based on an analysis of the organization’s operations and usage of personal data, in light of Article 3, which defines the regulation’s territorial scope. This article is the key reference for organizations outside the EU in determining whether they must adhere to GDPR. It states that organizations must comply with GDPR if they offer goods or services to individuals in the EU (data subjects), even without payment, or monitor the behavior of those individuals. In today’s digital world, these practices are not rare.
The starting point should be to determine whether the organization processes personal data of EU citizens, either as a controller or a processor, or whether a part of the organization operates within EU borders. If the answer to any of these questions is yes, then it does not matter where your business headquarters are located. As long as you are in the “place where Member State law applies by virtue of public international law,” you need to comply with GDPR.
To help guide this process, the organization should perform a data protection impact assessment, a required element of GDPR for high-risk processing and a useful initial step in determining the need to comply. Once the organization determines that it has to comply with the regulation, the compliance program must cover all parts of data processing. Data processing “includes the collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction of personal data.” GDPR applies to both automated and manual data processing.
An organization impacted by GDPR needs to assess, implement and comply with its specific requirements. These requirements will affect the entire organization and how day-to-day operations are conducted with respect to personal data. New processes and controls should be implemented to protect the personal data of EU citizens and to protect the organization from liabilities caused by non-compliance.
Organizations that see 25 May not only as a deadline, but as the starting point of a long-lasting GDPR compliance program, will have an advantage in processing personal data according to GDPR principles. Organizations should use this moment as an opportunity to implement best practices and realize benefits from GDPR.
Editor’s note: ISACA’s Implementing the General Data Protection Regulation publication is an educational resource for privacy and other interested professionals; it is not legal or professional advice. Consult a qualified attorney on any specific legal question, problem or other matter. ISACA assumes no responsibility for the information contained in this publication and disclaims all liability with respect to the publication. 2018 © ISACA. All rights reserved. For additional ISACA resources on GDPR, visit www.isaca.org/GDPR.
The purpose of the General Data Protection Regulation (GDPR) is to harmonize the data privacy regulations that each European Union member state implemented to comply with GDPR’s predecessor, the Data Protection Directive. GDPR provides a single, comprehensive regulation that is compulsory for all organizations processing the personal data of individuals living within the European Union.
The regulation becomes enforceable on 25 May 2018, after a two-year grace period to allow organizations to implement GDPR. GDPR substantially increases data subjects’ rights – and with penalties of up to 4% of annual global turnover, the regulation has the potential to fundamentally change the way organizations view and process personal data. That said, the purpose of this blog post is not to tell you what GDPR is or whom it will impact, nor to pour more oil on the fear-mongering flames. Over the past two years, most of us have seen more than enough of these types of articles from privacy experts. I am writing today to introduce ISACA’s new GDPR guide.
Six months ago, ISACA brought together a team of information technology, information security, audit and data privacy professionals from around the world to help develop a guide that provides a pragmatic approach to implementing GDPR in organizations large and small. This guide provides a comprehensive introduction to GDPR, along with a plan to help organizations implement a data privacy program that complies with GDPR requirements.
The guide also includes the available information from the Article 29 Data Protection Working Party (WP 29), which provides clarification on various topics covered in the regulation. WP 29 guidance, where available, has been included within ISACA’s GDPR guide. At 100 pages, the guide can easily be read in a weekend. It will serve both as a handy companion during the implementation of your data privacy program and as a solid reference during your day-to-day activities.
The guide provides advice on topics such as identifying and classifying personal data, data governance, information security, managing compliance in your supply chain, data breaches, employee awareness and more. The guide also includes several annexes that provide specific recommendations to help practitioners implement an effective and efficient data privacy program. Annex 1 is divided into nine domains that cover 46 processes organizations should implement as part of their GDPR programs. Annex 2 provides guidance on how to set up and manage the Data Protection Impact Assessment (DPIA) process. Annex 3 provides a sample personal data register, which must be created, maintained and readily available in the event of an audit. Throughout the document, we have defined common data privacy terminology and included a glossary of terms; we suggest you ensure these terms are used correctly within your organization to avoid confusion.
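To make the idea of a personal data register concrete, here is a minimal sketch in Python; the field names are illustrative assumptions, not the actual template from the guide’s Annex 3:

```python
from dataclasses import dataclass, asdict

@dataclass
class RegisterEntry:
    """One row of a personal data register (fields are illustrative only)."""
    data_category: str     # e.g., "customer email address"
    purpose: str           # why the data is processed
    legal_basis: str       # e.g., "consent", "contract"
    storage_location: str  # system or vendor holding the data
    retention_period: str  # how long before erasure
    shared_with: list      # third parties receiving the data

entry = RegisterEntry(
    data_category="customer email address",
    purpose="order confirmation",
    legal_basis="contract",
    storage_location="CRM (EU data center)",
    retention_period="2 years after last order",
    shared_with=["email delivery provider"],
)

# A register is then simply a list of such entries,
# exportable as plain dictionaries for an audit.
register = [asdict(entry)]
print(register[0]["legal_basis"])  # prints "contract"
```

Even a structure this simple makes it possible to answer the basic audit questions: what is held, why, on what basis, where, and for how long.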
The ultimate purpose of the guide is not simply to help organizations become GDPR compliant, but also to ensure the privacy of real people. To this end, we stress that the comprehensiveness of your data privacy program should be based on the risk to the subjects’ data that you hold and not solely on the risk to your organization.
ISACA’s GDPR Working Group believes that implementing GDPR will not only reduce the risks to your organization, partners and customers, but also has the potential to improve the effectiveness of your organization through the implementation of sound policies and processes. Many of us on the working group are privacy practitioners who will use the guide to help implement GDPR in our organizations. This will allow us to see first-hand what worked well and what could be improved. Stay tuned to this space, as we will provide regular updates as we count down to 25 May. Once we’ve received sufficient feedback, we will review and update the guide. In the meantime, we hope this guide is beneficial to you and your organization.
Medical device security flaws are increasingly in the news. Consider the announcement from the U.S. Food & Drug Administration last year about a flaw in one model of a St. Jude Medical implantable pacemaker, subsequently covered in more than 14,000 published reports to date. Thirty-four different individuals messaged me soon after the news broke, asking if I had heard about the approximately 750,000 pacemakers of this specific model that had significant security vulnerabilities. Many reports about other types of wirelessly connected medical device flaws appeared before that, and more have been reported in the few months since.
Medical devices are integral parts of hospital networks
According to various estimates from research organizations – and healthcare CISOs I chatted with at the Detroit SecureWorld event last fall, where I delivered a keynote about medical devices – anywhere from 30-70% of medical devices within hospitals and clinics are “smart”: digitally connected to smartphones, the internet, clinic networks, directly to other devices, etc. These large numbers of medical devices attached to healthcare networks increase the possibilities for a wide range of security and privacy incidents to occur through exploiting their vulnerabilities – especially from and through the medical devices that have no legitimate security controls engineered within them.
Security and privacy incidents can occur due to various factors, such as:
- Malicious outsider intent - hackers who use such things as ransomware, DDoS bots and other malware to shut down and disrupt network availability, exfiltrate and/or modify data, delete data, etc.
- Malicious insider intent - inappropriately accessing patient data, using patient data for identity fraud and other crimes, selling patient data to criminals, etc.
- Mistakes - input errors, programming errors, accidentally opening access to unauthorized individuals, etc.
- Unintended consequences resulting from lack of planning - attaching smart medical devices to a network whose anti-malware software views them as malicious and shuts them off, or creating a denial of service when device data volume exceeds available bandwidth, etc.
- Lack of personnel information security and privacy awareness, which can lead to all the previous examples, in addition to knowingly taking actions that result in privacy breaches, data modification, patient harm, etc.
Security complexity requires multiple layers of controls
Some changes to medical devices can be done remotely. Some need to be done in proximity using near field communication (NFC) protocols. However, I’ve communicated with too many in the medical device industry who have expressed the belief, or claimed, that using NFC is a 100% solution for security. When I asked on three different occasions in 2017 about the security of their newly announced medical devices, representatives (IT security VPs/management) from each of three different large medical device manufacturers told me, “We use NFC, so security is not an issue.” When I explained that if medical devices attach via NFC to computers that are part of a network, then basically any other node on that network may be able to reach the medical device through that network connection – such as through control settings necessary for network functions, or through the use of discovery tools such as Shodan – each of the medical device representatives stopped communicating with me. Avoiding a security risk discussion does not solve the associated security risk.
Lack of planning when integrating with networks and systems can shut down medical devices, sometimes during operations. There have already been medical devices used for performing operations, such as heart procedures, that shut down as a result of an anti-virus scan. Or consider the time a nurse tried to charge her cellphone using the USB port in an anesthesia machine; it shut down the machine. I could provide a hundred additional examples. If medical device manufacturers do not improve the security engineering of their devices, security incidents will increase, along with privacy breaches and patient harm.
Medical device security concerns are justified
Healthcare providers (doctors, nurses and surgeons) are concerned. Rightly so. Flawed devices negatively impact their ability to assure patients they are providing them with safe devices that will help, and not potentially harm, them.
Healthcare information security practitioners (CISOs, CIOs, VPs, managers, etc.) are concerned. And for good reason. Security flaws within medical devices create vulnerabilities to data and functioning not only within the devices themselves, but also to the networks to which they are attached, and other devices on the networks.
Healthcare IT auditors are concerned. And they should be. Insufficient medical device security controls are compliance violations for growing numbers of regulations, laws and contractual requirements, in addition to facilities’ own posted privacy and security notices, which contain promises to which they are legally bound.
Healthcare regulators are increasingly concerned. Justifiably so. They are accountable for ensuring information security and privacy regulations are followed. When regulators see more reports of medical device security flaws and vulnerabilities, they are going to become more proactive to pressure medical device-makers to improve security controls, and to pressure device users to ensure devices are implemented with appropriate security.
Patients are concerned. Of course. Their lives could be at stake.
Dedicate 2018 to improving medical device security
As Data Privacy Day approaches this Sunday, here’s a recommendation for those in the medical device space (manufacturers, engineers, and vendors). Make it a goal in 2018 to successfully establish effective and practical information security controls within your devices. Stop telling hospitals and clinics that it is not practical for you to do this. Building security controls into devices from the start is actually more practical, and it will significantly improve security protections for those using medical devices. This idea is supported not only by those in the information security profession, but also by the FDA and other regulators.
This will not let healthcare data security practitioners off the hook. Even if medical device creators improve the security of their devices, healthcare IT and security practitioners will still need to remain diligent to ensure the security of those devices in how they are connected to their networks, the control settings to access them, and the management of the data that comes from them. But improved device security will support these efforts.
Establish your baseline for current levels of medical device security now. Then, in December of this year, determine if and where there have been improvements, or if data security, privacy and patient protections have actually degraded. It all depends upon where medical device companies decide to place their priorities.
The implications of GDPR have become a popular topic of conversation in the information security and privacy communities. Now that we have arrived in 2018, expect those discussions to become all the more prevalent in advance of the May enforcement deadline.
In a panel discussion at ISACA’s CSX Europe conference, experts from ISACA, IAPP and ENISA joined together to provide their insights on GDPR and how to prepare. Watch the video, and in less than five minutes, come away better prepared to engage colleagues and fellow practitioners in this ongoing dialogue.
ISACA has produced additional GDPR resources to help prepare its global professional community for this high-impact regulation, with more on the way in the coming weeks, including an upcoming e-book with extensive guidance on implementing GDPR.
In my last post, I spoke about the Internet of Things (IoT) in terms of trust, security and privacy at a high level. Here, I will take a deeper dive in terms of how IoT security and privacy can impact an ecosystem interconnect.
When we talk about IoT, we tend to think about the processes we implement as we migrate to sensor-driven infrastructure and automated processes.
Looking at economies and technology ramp-up trends from a financial perspective, we can expect standardization around policies and processes, as well as around the interfaces that connect sensors to networks, platforms, application systems, or a combination of services.
It can all appear to be complex and large scale, especially in the borderless world of IoT. However, if as security and privacy professionals we ask ourselves, “What are the major areas we should focus on?” my perspective is that we will have to look at:
- Device security and settings
- Securing device and system physical access (IAM)
- Securing our communication network systems
- Dealing with the large volume of data we will have to process, leveraging big data analytics, risk scoring and criticality metrics aligned to the system, user privilege and business functionality
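As a rough illustration of the last bullet, a risk score might weight system criticality, user privilege and business impact; the weights and the 1-5 rating scale below are invented for this sketch, not part of any published model:

```python
def risk_score(system_criticality, user_privilege, business_impact):
    """Combine three 1-5 ratings into a weighted risk score (0-100).

    The weights are illustrative assumptions only.
    """
    weights = {"system": 0.40, "privilege": 0.35, "business": 0.25}
    raw = (system_criticality * weights["system"]
           + user_privilege * weights["privilege"]
           + business_impact * weights["business"])
    return round(raw / 5 * 100)  # normalize the 1-5 scale to 0-100

# A highly critical system touched by a highly privileged account:
print(risk_score(5, 5, 3))  # prints 90
```

In practice such scores would feed event-driven analytics, letting a monitoring platform prioritize the small fraction of IoT events that actually warrant investigation.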
IoT PriSec Model
The team at The Cyber Policy and Security Governance Institute has been developing an IoT PriSec Model. This model:
- Combines best of breed practices based on network, system and application security, which integrates functionality to meet data security lifecycle expectations as well as data privacy requirements for in-border and cross-border migrations.
- Is built on the premise that an IoT infrastructure ecosystem consists of a self-healing, secure network infrastructure and systems that extract data for analysis from system-to-system connections and sub-system interactions. This system will have a big data capability to build an analysis of permitted, potentially dangerous and malicious activities, allowing for event-driven capabilities and driving a mindset of adaptive security.
- Will be further enhanced to adapt to blockchain technologies.
- Integrates privacy definitions that are tied into IAM and privileged access management, which is tightly tracked and auditable.
- Promotes an effective combination of cryptography and smart analytics integrated into sensor security mechanisms which can quickly assess, measure and score attack attempts and attack paths for smart attack detection.
One area that will have an impact on IoT environments, given that the growth of cloud and big data are enablers of IoT, is that of unikernel security.
In the paper “Unikernels: Library Operating Systems for the Cloud,” A. Madhavapeddy and team describe a unikernel as follows: “In the context of virtual machines and cloud computing, it makes sense to describe the whole virtual machine as a unikernel.”
Bratterud, Happe and Duncan presented a paper on “Enhancing Cloud Security and Privacy: The Unikernel Solution,” which lists six observations exhibited by Unikernel systems as follows:
- Choice of service isolation mechanism
- The concept of reduced software attack surface
- The use of a single address space, shared between service and kernel
- No shell by default, and the impact on debugging and forensics
- Microservices architecture and immutable infrastructure
- Single thread by default
In a following piece, I will present further details on this aspect, as well as other areas that we are seeing leading IoT vendors focus on from a security and privacy best practice perspective.
The European Union has long considered that a person owns all non-public data about himself or herself. Each individual then explicitly grants and revokes, for every interested party, the rights to process (for example: collect, analyze, aggregate and store) his or her personal data.
With some data, it is easy. One signs a contract, and later on, perhaps cancels the contract, along with permissions to process the data. But the question is not only about granting or revoking rights to process; it is also about knowing which data is stored, how it was processed and with whom it was shared, and having the possibility to remove that data from systems (i.e., to be forgotten).
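The grant-and-revoke model described above can be sketched as a small, append-only consent ledger; this is a hypothetical illustration, not a reference to any particular system:

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Tracks which processing purposes a data subject has granted or revoked."""

    def __init__(self):
        self._events = []  # append-only history: (purpose, action, timestamp)

    def grant(self, purpose):
        self._events.append((purpose, "grant", datetime.now(timezone.utc)))

    def revoke(self, purpose):
        self._events.append((purpose, "revoke", datetime.now(timezone.utc)))

    def is_granted(self, purpose):
        # The most recent event for a purpose decides its current state.
        for p, action, _ in reversed(self._events):
            if p == purpose:
                return action == "grant"
        return False  # never granted

ledger = ConsentLedger()
ledger.grant("marketing emails")
ledger.revoke("marketing emails")
print(ledger.is_granted("marketing emails"))  # prints False
```

Keeping the full history, rather than only the current state, is what makes it possible to answer the harder questions the paragraph raises: what was permitted, when, and for how long.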
Activities in the physical world leave some traces; in the digital world, they leave even more. Each of our digital activities touches many systems: computers, servers, information systems, transmission systems, security systems, usage analysis systems, and so on. Moreover, not all of these traces are covered by contractual relationships, due to the complexity of interactions between systems, as well as usability considerations.
Information systems and the Internet were designed mostly respecting another model – that the owner of the system owns the data as well, unless it is specifically provisioned otherwise.
The new EU General Data Protection Regulation (GDPR) threatens organizations worldwide that process the data of EU residents with large fines if the EU approach to data ownership is not respected. ISACA recently issued a new publication, GDPR Data Protection Impact Assessments – What Does It Mean to Me, providing guidance to practitioners and their organizations on how to deal with these considerations.
Despite all the difficulties, I would argue that implementation of the new regulation brings a lot of benefits to all those involved in IT governance, such as:
- Organizations are forced to inventory all their digital assets, and start managing them better. In this way, more resilience against cyberattacks or mismanagement can be created.
- IT staff are forced to talk with and understand legal teams, discuss the impact, and better understand threat landscapes and liabilities, which shrinks the understanding gap.
- The true cost of automation will be calculated better. Right now, it often includes only the cost of designing, deploying and running information systems. Now, the securing of information systems, data and information system life-cycling, and the creating, processing, destroying, auditing, handing over and disposing of data will be assessed as well.
- A new profession with a clear mandate and responsibilities will be brought into most organizations. The Data Protection Officer (DPO) will provide extra help in IT governance.
Overall, GDPR has the potential to be one of the pillar forces that gets us together to address cyber security properly. While it alone will not be sufficient, combined with other governance and regulatory efforts, real progress can be made.
GDPR (General Data Protection Regulation) introduces the new role of Data Protection Officer (DPO). While many organizations have had the title of such a role under the existing EU Directive, member states had different interpretations of what this meant. GDPR takes the responsibilities of the DPO to another level.
To be able to effectively discharge the duties of the DPO, as outlined in Articles 38 and 39 of GDPR, the DPO needs to hold high authority in the organization, have a wide range of experience and be multiskilled, both technically and socially.
The requirement to appoint a DPO will mainly fall upon large corporations, government bodies, organizations in the health and social care sectors, financial institutions and, for the most part, organizations that are based in the EU.
However, small and medium enterprises (SMEs) may also need a DPO role, as they could be a key component in a large corporate or government organization’s supply chain. In these cases, the DPO probably will not be a dedicated role, and could even be brought in as a managed service.
Also for the first time, an organization acting as an information processor under an outsourced, managed service, such as a cloud service provider arrangement, may need to consider the role of DPO.
This all means there is going to be a large requirement to recruit DPOs. There are many job adverts out there requiring X number of years of GDPR experience, but these people simply do not exist. Yes, there are many data privacy professionals out there, but the requirements of the GDPR go beyond this.
So, what makes a good DPO?
The DPO needs a mix of skills and experience extending from data privacy into information risk management, relationship management, persuasive/negotiating skills, and the ability to operate at the highest levels within an organization. DPOs will need to be able to communicate effectively across the whole of the organization, with the ability to articulate potential risk in business terms. The DPO needs to understand the risk to information and how to protect that information appropriately and adequately, commensurate with its level of risk, through people, processes and technology; related governance processes; and management controls.
The DPO’s initial primary focus will be to get his or her organization ready to be GDPR-compliant by the May 2018 deadline, when GDPR becomes enforceable. This will require engagement with all areas of the organization to obtain a good understanding of the information gathered, processed, stored and shared, with particular attention to Personally Identifiable Information (PII).
However, once the DPO has the organization GDPR-ready, the DPO can add real business value by taking a wider view into information governance. With this in mind, larger organizations should seriously consider developing the DPO role into the role of the Chief Data Officer (CDO).
Many of the required skills, and the necessary standing within an organization, are those of a Chief Data Officer. While the CDO role is wider than that of the DPO, there are many similarities.
To sum up, there is a massive requirement to recruit DPOs with GDPR experience. As GDPR is only in its implementation phase, these people do not exist in the numbers required. Therefore, organizations need to take a more pragmatic view. Look at existing data protection professionals; can they be developed into the role of the DPO with training and coaching? Look at information risk and information governance professionals; can they be trained in data privacy? For large corporates, look at the role of Chief Data Officer, and for SMEs, look at buying a managed service.
There has been a lot written over the past year or so about the EU General Data Protection Regulation (GDPR) – what is required, and what needs to be accomplished sooner rather than later in order to meet the May 25, 2018 compliance date. And with 99 articles containing hundreds of requirements, GDPR certainly presents many topics that must be addressed.
While seven to eight months may seem like a long time to address them all, it is important for those responsible for GDPR compliance activities to realize that some of those activities will necessarily take many weeks of planning and preparation, and then most likely many additional weeks of actual implementation.
One case in point is performing a GDPR-compliant data protection impact assessment (DPIA). I’ve heard and read a variety of statements made about DPIAs over the past several months, and I want to correct and clarify a few that are especially concerning.
- “I’ve already done a privacy impact assessment (PIA), so I’ve got the GDPR DPIA requirement already taken care of!” Wait; not so fast. While I view a DPIA as a specific type of PIA (this is debated by various fans and foes of both DPIAs and PIAs, but in my experience doing more than 100 PIAs and more than a dozen DPIAs, I believe that, with the outliers effectively addressed, a DPIA fits well within the larger PIA domain), a DPIA does have important and significant differences from a traditional PIA. So, don’t think that just because you’ve done a PIA at some point within the past year or two, you’ve met all the GDPR requirements for a DPIA.
- “Our lawyers told us it was a legal activity, and that IT, privacy and information security folks don’t need to bother with doing a DPIA.” While you need to do a DPIA to meet GDPR legal requirements, the answers you will need to provide will typically not be known to the legal department, so the effort will need to include key stakeholders in IT, information security and privacy. And if you’re waiting for the legal department to contact you with simple yes and no questions, keep in mind that many of the answers needed for an accurate and acceptable DPIA will not be yes or no. To calculate risk levels as accurately as possible, you must provide more detailed descriptions of where you stand with each activity.
- “I got a free 10-question GDPR readiness checklist from the Internet, so I’ll use that for my DPIA.” That is a dangerous, and incorrect, belief. A checklist is not an assessment. I am a fan of using checklists to keep track of progress with projects, to make sure that I have addressed specific topics fully, to answer truly yes/no types of dichotomy questions, or to cross off the items I wanted to get at the grocery store after I put each in my cart. An assessment is not an either/or dichotomy. An assessment involves various types and levels of analysis, arriving at a result that could be any of a number of answers, representing a multitude of risk levels. GDPR checklists that boil down the high-level activities you need to do can be helpful; a checklist, though, will not accomplish the GDPR-required DPIA.
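The difference can be made concrete in a few lines of Python: a checklist yields booleans, while an assessment grades each area and rolls the grades into a risk level. The items and the 0-4 maturity scale below are invented for this sketch, not GDPR-mandated:

```python
# A checklist item is binary; an assessment item is graded.
checklist = {"data register exists": True, "DPO appointed": False}

# An assessment rates implementation maturity on a 0-4 scale
# (scale and items are illustrative only).
assessment = {
    "data subject access handling": 3,    # implemented, partially tested
    "breach notification process": 1,     # documented only
    "cross-border transfer controls": 2,  # implemented for some flows
}

def overall_risk(ratings, max_level=4):
    """Convert maturity ratings into a residual-risk percentage."""
    achieved = sum(ratings.values())
    possible = max_level * len(ratings)
    return round((1 - achieved / possible) * 100)

print(overall_risk(assessment))  # prints 50
```

The checklist can only say whether an item exists; the assessment expresses how well each area is implemented and where the remaining risk is concentrated.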
A significant purpose of requiring organizations to conduct DPIAs is to identify and reduce the data protection risks within projects, networks and systems, reduce the likelihood of privacy harms to data subjects, and determine the extent to which all of the applicable GDPR articles have been implemented by the organization. Traditional PIAs have not fully addressed harms to data subjects (something that is important to address whether or not it is part of a DPIA), and they certainly did not look at the specific DPIA requirements that are unique from the topics traditional PIAs cover.
To the specific point of performing a DPIA, I recommend that organizations use a framework that not only addresses and meets the GDPR requirements, but can also meet other requirements for performing other types of privacy impact assessments. I’ve created a PIA framework, based upon the ISACA Privacy Principles, which consolidates similar privacy principle requirements and topics into the 14 ISACA Privacy Principles, and maps all the DPIA requirements within them, in addition to those DPIA questions also mapping to other standards, frameworks and regulatory data protection requirements.
I will go over the associated methodology on 28 September at the “How to Perform GDPR Data Protection Impact Assessments” ISACA webinar (www.isaca.org/Education/Online-Learning/Pages/Webinar-How-to-Perform-GDPR-Data-Protection-Impact-Assessments.aspx), and will also point to a spreadsheet I created for ISACA members to use for performing DPIAs, as well as a new version of an automated DPIA tool I created for ISACA to make available to members.
I hope you can join me!
Privacy has had its Chernobyl moment.
Maybe it was when a foreign power stole everything every American had submitted for a clearance form from the Office of Personnel Management. Maybe it was when an insurer lost control of the health records of millions of Americans. Maybe it was when the United Kingdom spilled its child benefit data. Maybe it was when India created a biometric ID system and sort of forgot about controls.
However you want to define a privacy Chernobyl, it, or something like it, has happened.
We exist in a world where our expectation of privacy has been shattered, diminished and demeaned, and yet privacy invasions still outrage us. What we haven’t done is build a cap – and certainly not a sarcophagus designed to protect the radioactive slag for an appropriately long time.
Privacy failures still make the news. Failures on the part of firms who have promised to take it seriously still result in 20-year consent decrees. (Recall that 20 years ago, in 1997, Alta Vista was still the dominant search engine, the Motorola flip phone was dominant amongst those weirdos who bothered with a cellphone, and 56k was pretty good internet connectivity through your phone line. Will word choices that seem agreeable today be sensible after 20 more years of technological acceleration?)
I want to encourage you to use Implementing a Privacy Protection Program: Using COBIT 5 Enablers With the ISACA Privacy Principles as a way for you to realize that personal data is radioactive, and you want to start treating it as such. If you accumulate too much, you risk a meltdown, but even when you have it in small doses, you want to be intentional about it. You want to know why it’s here, how you’re protecting it, and how to get rid of it when the risk exceeds the reward.
You should be thinking of ISACA’s new privacy protection guidance as an important move forward in your privacy journey. It’s a necessary step, and going through the steps will help you understand if there’s more that you need to do.
Editor’s note: Additional privacy-related guidance can be found in ISACA’s new white paper, Adopting GDPR Using COBIT 5.
About Adam Shostack: Adam is a consultant, entrepreneur, technologist, author and game designer. He's a member of the BlackHat Review Board, and helped found the CVE and many other things. He's currently helping a variety of organizations improve their security, and advising and mentoring startups as a Mach37 Star Mentor. While at Microsoft, he drove the Autorun fix into Windows Update, was the lead designer of the SDL Threat Modeling Tool v3 and created the "Elevation of Privilege" game. Adam is the author of "Threat Modeling: Designing for Security," and the co-author of "The New School of Information Security."
Most of the people I speak to about GDPR are struggling with two main things.
The first one is how to interpret the GDPR text, specifically on issues like consent or new privacy rights like the “right to restrict processing,” the “right to oppose profiling,” or the scope of the “right to data portability.” The other is where to start, given the lack of detailed guidance on practical implementation.
I think these two are interlinked and have to be addressed together. In other words, I believe you should approach the GDPR program as a whole, and not try to separate it into different aspects or outsource the program in its entirety, as some of the people I’m speaking with are doing.
My business leaders, data owners, IT architects and the CIO have all been badgering me for clear guidance or definitive policy statements, which is really hard when the GDPR text is oblique and vague on the ‘what’ and ‘how,’ and there is no regulatory guidance or case law yet. They want absolutes – a rule book, like PCI. They want hard facts with yes or no answers. Well, this simply is not possible.
In the past, I turned to lawyers, who kept on telling me “it depends,” which is no good when you need to provide definitive or strategic direction. So instead, we got down into the weeds of the text, and I worked night and day with my in-house lawyer, a solutions architect and really good privacy analysts. Between us, we developed the GDPR Framework and the Privacy Playbook.
The GDPR Framework is just what it sounds like: a concept model – a framework by which the architects and the business could start to consider, from a system or process perspective, the impacts of “the minimum rules.” The Privacy Playbook gave us the flexibility to develop, amend and collaboratively interpret the text, and to run ‘what if’ scenarios that helped shape the crunch decisions the business needed so it could get on with business planning (impact vs. risk). The decisions were captured as policy decisions, to ensure the full impact of changes could be considered and absorbed by the business.
So far, this collaborative approach has worked out well, as now we are drafting a consolidated version of the Playbook – with the minimum outcomes necessary to comply. We have completed the discovery exercise to understand the current proliferation of key data sets, and we are considering the full implications (and options) of what ‘good’ GDPR compliance looks like.
The board is now on board, and the path to our compliance milestone of May 2018 is clearer.
One thing is for sure, the only way to get there is by taking one step at a time.
Editor’s note: For more on GDPR, register for the 14 September webinar, “How to Jump Start GDPR with Identity & Access Management.”