The task of establishing and configuring audit policies is usually left to security experts and system administrators who are in charge of implementing security configurations, particularly in small-to-medium enterprises with a lean IT structure. There is usually little guidance on how these configurations should be managed.
One common mistake administrators make is failing to define adequate audit trails to enable early detection of security threats and to support related investigations. The main reason for this oversight is a failure to balance audit trail needs against system capacity. Some administrators argue that excessive auditing produces huge volumes of event logs that are unmanageable. Deciding what to audit and what may safely be omitted is therefore not just a configuration task, but rather a risk assessment task that should be embedded in the governance structures of the organization’s IT security frameworks.
Risk assessment process over audit requirements
The audit needs of the organization are guided by the regulations to which it is subject, its security threat models, the information required for investigations and its IT security policy. Identification of the possible threats the organization faces is usually carried out as part of risk assessment. Security events derived from audit policy settings are key risk indicators that the organization should use to measure how vulnerable the system is to the identified threats. It is therefore critical that enabling audit policies not be taken casually.
System auditing should be considered across the platforms the organization uses – that is, operating systems, databases and applications. Due consideration of what information is obtained from the operating system (OS) against databases and/or applications should be used to streamline the volume of audit data collected and to safeguard servers’ storage capacity. Where the organization decides not to record audit trails at any of the system levels – that is, OS, databases or applications – an impact analysis should be carried out to ensure that the costs of missing such logs are quantified against regulation penalties and organizational risk appetite.
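The impact analysis described above can be sketched as a simple expected-cost comparison. The sketch below is purely illustrative: every figure (log volumes, storage prices, incident probabilities, penalty amounts) is an assumption I have invented for the example, not guidance from any framework.

```python
# Hypothetical impact analysis: compare the annualized cost of retaining
# audit logs at a given system level against the expected cost of NOT
# having them when an incident or regulator demands evidence.

def retention_cost(daily_log_gb, retention_days, cost_per_gb_year):
    """Annualized storage cost of keeping the logs."""
    stored_gb = daily_log_gb * retention_days
    return stored_gb * cost_per_gb_year

def missing_log_cost(incident_probability, regulatory_penalty, investigation_overhead):
    """Expected annual cost of missing the logs when they are needed."""
    return incident_probability * (regulatory_penalty + investigation_overhead)

# Illustrative figures only -- all of these numbers are assumptions.
keep = retention_cost(daily_log_gb=5, retention_days=365, cost_per_gb_year=0.05)
skip = missing_log_cost(incident_probability=0.10,
                        regulatory_penalty=250_000,
                        investigation_overhead=50_000)

print(f"Cost to retain OS-level logs:  ${keep:,.2f}/year")
print(f"Expected cost of missing logs: ${skip:,.2f}/year")
print("Decision input:", "retain" if keep < skip else "weigh against risk appetite")
```

Even a rough calculation like this forces the conversation out of "the logs are too big" and into a comparison the governance structure can actually sign off on.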
In order to facilitate the systematic review of an organization's audit needs, guidelines should be developed and approved at the appropriate level in accordance with the governance structure of each organization. Having a guideline that outlines the audit policy objectives, risks, threats and data collection points will ensure that adequate audit logs are maintained. This will, in turn, facilitate log monitoring for suspicious events and allow for detailed investigations if the need arises.
The guidelines should not only focus on configuration of audit settings but should also provide guidance on the steps to be followed when procuring log management software to manage event logs. Different log management software is designed to meet the logging needs of different organizations, and as such, the software procured should align with the audit objectives and needs of the organization. The one-size-fits-all concept should not be applied.
The configuration of audit policies across the organization’s platforms should thus be a downstream task, implemented through clear guidelines that promote risk assessment of the organization’s audit needs.
I’m a classic introvert. Early in my IT career, I had no interest in networking with others. I did not see the tangible benefits or understand how networking could be useful to advancing my career interests.
After some time, I realized that I wasn’t connecting with the people inside and outside of my organization to a degree that allowed me to advance socially or professionally. So I challenged myself and made a conscious effort to change my behavior in order to first know my co-workers better and gain useful contacts in the industry as a byproduct. The benefits have been tremendous. Networking has led to recent job promotions, salary increases and further development of the Adam Kohnke brand.
As an IT auditor, I engage in public speaking with various audiences as part of my daily routine. In my experience, professional networking sharpens your verbal communication and presentation skills through increased interaction with other individuals. The "networking avoidance" mindset is pervasive today among many young professionals and in corporate culture, as the value of professional networking is not something we are usually taught by our parents, in school or even at our jobs. It is often, sadly, a behavior one learns alone.
This blog post seeks to share some of my experience with professional networking and some tricks I’ve learned over the last two years of actively practicing.
- Identify a specific target audience and goal. Technically speaking, every person, company and institution is a viable target for your efforts, but your networking efforts should be specifically focused on advancing toward your professional and personal goals. Want to get into IT Security? Begin your networking efforts by contacting IT Security professionals, managers or executives who work in your industry or in a company in which you have interest. Want to learn more about software development? See if a college instructor or student in the field is willing to sit down for a cup of coffee to share notes and ideas on this topic. There’s no limit on where networking can take you or who you can access, so don’t create limits for yourself!
- Start with platforms and an outreach message that are comfortable for you. Email and online platforms like LinkedIn are simple and effective methods to start with in order to break the ice and help you gain confidence with your networking approach. These methods provide increased flexibility, more time to experiment and broader coverage versus a random cold call. The downside is that some people do not appreciate the impersonal nature that comes with these electronic approaches, so as you gain comfort in your approach, start adding personal methods into the mix such as lunches, cold calls or the cubicle drive-by. When using electronic methods, keep the message short and to the point. Formally track your results to know what’s working and adjust accordingly.
- Start small, but be persistent. Networking takes time, effort and some willingness on the other individuals’ part. Touching again on Point 1 above, it usually helps to identify a small group of people (around 12 individuals or fewer) you have an interest in to see how effective your approaches and methods are. Your response rate will usually vary, but if you go much above this number, you may suddenly find you have 20 lunch dates in the near future, and what was supposed to be a fun networking exercise turns into a stressful chore. You may also initially not hear back from anyone. It does happen, but understand that it might not have been the right time for that individual. Revisit that contact later or just move on to the next one. There is never going to be a shortage of potential contacts, so don’t give up!
- Offer something in return for the other party’s time. A recent experience with a networking contact of mine revolved around improving my own networking efforts and results. This person is experienced in networking, and I sought the contact’s advice. Following completion of our conversation, I asked if I could offer something in return. I ended up issuing a recommendation on LinkedIn that reflected our positive interaction on the subject. It can be something simple – you could offer to share audit techniques or other applicable industry knowledge in return for that individual’s time. What goes around comes around!
As internal auditors, we’ve seen an uptick in usage of the term “Agile” in reference to how more and more companies are developing software. Agile software development has grown increasingly popular as both software and non-software companies transition from traditional development methodologies, such as the waterfall model, to a value-driven Agile approach. Like any auditable area, this requires internal auditors to understand the key concepts, evaluate the risks and determine how to effectively audit the process based on pre-defined objectives. However, that’s not the purpose of this blog post. What we auditors find even more intriguing is how the values and principles behind Agile software development apply to the field of internal auditing.
The Agile foundation
Agile is an overarching term for various software development methods and tools, such as Scrum and Scaled Agile Framework (SAFe), that share a common value system. Developed in 2001, the Agile Manifesto provides a set of fundamental principles that Agile teams and their leaders embrace to successfully develop software with agility. Companies that have adopted Agile development practices recognize the urgency to adapt quickly to changing technology and deliver enterprise-class software in a short amount of time; otherwise, they run the risk of becoming extinct.
Some of the top benefits of Agile development include:
- Accelerated product delivery
- Improved project visibility
- Increased team productivity
- Better management of changing priorities
Why apply Agile to internal audit?
At The Mako Group, we have found that applying Agile concepts to the internal audit function is not a new concept, but has never been more crucial than in our current environment. Like the companies we aspire to protect through objective assurance and advice, internal audit must be able to address emerging critical risks and provide relevant insight in a timely fashion. Despite our best intentions, many audit departments still develop a long-term plan that cannot be easily changed and often employ antiquated audit methodologies. If we truly want to add significant organizational value and be a trusted partner with management, internal auditing must evolve, and Agile techniques can help us do that.
Agile internal audit tactics
Just as companies are scaling Agile software development based on the size, capabilities and culture of the organization, the extent of an internal audit function’s agility will vary widely for one group versus another. Nonetheless, we have narrowed our focus to three key areas that every internal audit department should consider when becoming more agile:
- Planning and prioritizing. Agile development teams utilize a backlog as the single authoritative source of work items to be completed, which must be continually prioritized. Items on the backlog are removed if they no longer contribute to the goal of a product or release, whereas items are added to the backlog if at any time a new essential task or feature becomes known. Similarly, the internal audit function should maintain a backlog of areas to be audited that is regularly evaluated and updated based on risk exposure. Instead of committing to a rigid audit plan, this approach allows for timely inclusion of new risks or auditable areas throughout the year. The importance of collaborating with stakeholders during the planning and prioritization process cannot be overstated. Before beginning work on a task or feature in the backlog, explicit and visible acceptance criteria must be defined based on end user requirements, which is called the definition of ready. The definition of ready is met for an item on the audit backlog when internal audit has the necessary resources available and agrees with the stakeholders up front on the scope, the goal of the project and the value to be delivered.
- Streamlining the process. Iterations are one of the basic building blocks of Agile development. Also known as a sprint, each iteration is a standard period of time, usually from one to four weeks, during which an Agile team delivers incremental value in the form of usable and tested software. Ultimately, items that move off the backlog must be divided into a series of sprints, which provide a structure and cadence for the work. In the context of internal auditing, the fieldwork associated with an audit should be broken into fixed-length activities that are appropriately sized to promote the motivation of a tight deadline without stressing the resources in place. As the goal is to be quick and iterative, versus confined to a pre-determined plan, eliminating unnecessary resources and efforts is instrumental to an audit team’s successful completion of the work within a sprint. Whenever possible, gathering evidence independently, which also alleviates the burden on stakeholders, is an excellent way for internal auditors to be more efficient. Moreover, examples of waste in the audit process commonly include:
- Distributing requests for evidence that are too vague.
- Sending emails back and forth when a phone call or in-person meeting would be a more productive solution.
- Exhaustively explaining every step taken without considering that concise documentation could achieve the same effect.
- Soliciting continuous feedback. One of the most commonly practiced Agile techniques is a daily stand-up meeting, normally lasting no longer than 15 minutes, in which an Agile development team discusses each member’s contributions and any obstacles. To be truly effective, internal audit team members must regularly check in with each other and not hesitate to raise questions or issues as soon as they come up. Rather than waiting until the fieldwork has been completed to start internal reviews, quality assurance should be built into the daily audit activities.
Furthermore, internal auditors must not wait until the end of an audit to provide results. Early and frequent communication with stakeholders means that the final report or presentation should simply reflect a visual summary of the insights already discussed. We should not only identify opportunities to enhance an organization’s operations but also continuously improve our own audit processes. A crucial role on an Agile team to help foster an environment of high performance and relentless improvement is the scrum master. Acting as the coach of an internal audit team, a scrum master would ensure that the agreed Agile process is followed and encourage a good relationship among team members as well as with others outside the team.
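The backlog-and-readiness idea described under "Planning and prioritizing" can be sketched in a few lines of code. This is a minimal illustration, not a prescribed methodology: the audit areas, the 1–25 risk scores and the two readiness criteria are assumptions made for the example.

```python
# A minimal sketch of a risk-prioritized audit backlog with a
# "definition of ready" gate. All fields and scores are illustrative.

from dataclasses import dataclass

@dataclass
class BacklogItem:
    area: str
    risk_score: int                  # e.g., likelihood x impact, 1-25
    scope_agreed: bool = False       # stakeholders agree on scope, goal, value
    resources_available: bool = False

    @property
    def ready(self):
        """Definition of ready: scope agreed AND resources available."""
        return self.scope_agreed and self.resources_available

backlog = [
    BacklogItem("Cloud migration controls", 20, True, True),
    BacklogItem("Payroll application access", 12, True, False),
    BacklogItem("Vendor management", 16, True, True),
]

# Continually re-prioritize by risk; only pull "ready" items into a sprint.
next_up = sorted((i for i in backlog if i.ready),
                 key=lambda i: i.risk_score, reverse=True)
for item in next_up:
    print(item.area, item.risk_score)
```

The point of the sketch is the two-step filter: risk exposure decides the ordering, but nothing leaves the backlog until the definition of ready is satisfied.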
Microsoft Exchange is one of the primary solutions medium and large organizations use to provide email services. Exchange serves directly as an information transport mechanism and indirectly as a storage medium for organizational data in the form of attachments and email message content. This blog post covers a high-level subset of audit considerations for an Exchange 2010 or newer environment to help your organization assess whether proper oversight and controls exist to limit the likelihood of unauthorized information disclosure, disposal or modification.
The Security Access Groups. Exchange privileged access is typically associated solely with the Exchange Administrators group. Starting in Exchange 2010, Microsoft introduced an internal Role-Based Access Control scheme that provides additional AD security groups with varying degrees of elevated permissions and rights. For example, members of the Server Management group can modify certain properties of any Exchange server in the environment. Members of the Organization Management group are essentially Exchange administrators, just without rights to perform mailbox searches. A total of 11 built-in Exchange Role-Based Access groups should be considered for review as it relates to privileged access. The Exchange Administrators group is the sum of all 11 role-based access groups.
Monitoring Group Membership. Exchange comes with 12 privileged security groups (Exchange Administrators and the 11 built-in role groups). Your ability to promptly detect and respond to membership changes in these groups can be useful in a variety of ways. First, it may allow you to proactively identify recon or insider-threat-based attacks if processes are in place to monitor and alert when additions to sensitive groups occur. A manual alert follow-up may indicate an account addition was unauthorized or associated with an external threat. Second, removals from sensitive Exchange groups may indicate a threat agent attempting to lock you out of your systems or prevent you from administering the environment prior to launching, or following, a successful cyberattack.
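The monitoring idea above boils down to diffing periodic snapshots of each sensitive group's membership and alerting on both directions of change. The sketch below is a simplified illustration: the group name and the account names are invented, and in practice the snapshots would come from your directory tooling or SIEM rather than hard-coded sets.

```python
# A minimal sketch of sensitive-group membership monitoring: diff two
# snapshots and flag additions (possible recon/insider threat) as well
# as removals (possible lockout attempt). All names are hypothetical.

def diff_membership(previous, current):
    """Return (added, removed) member sets between two snapshots."""
    prev, curr = set(previous), set(current)
    return curr - prev, prev - curr

yesterday = {"alice", "bob"}
today = {"alice", "mallory"}           # bob removed, mallory added

added, removed = diff_membership(yesterday, today)
for account in added:
    print(f"ALERT: {account} added to Organization Management -- verify change ticket")
for account in removed:
    print(f"ALERT: {account} removed from Organization Management -- possible lockout attempt")
```

Each alert should route to a manual follow-up, since the diff alone cannot distinguish an approved change from a hostile one.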
Auditing Administrator Actions. Exchange provides built-in administrator logging functions, allowing commands or actions performed by privileged users to be captured for review. The logging can be redirected to SIEMs or other repositories for independent and secure analysis. The potential need for this function lies in some of the rights available to privileged Exchange users such as the ‘SendAs’ right, which allows an email to be sent by ‘User A’ while appearing to have come from ‘User B.’ Oh what fun you could have with ‘SendAs’ rights! Admin logging can also capture if hard and soft deletes were issued against another user’s mailbox (think the C-Suite) or if deleted items have been recovered. Check administrator logging status in your environment by issuing the Get-AdminAuditLogConfig | Select *audit* command from the Exchange Administrator shell.
Auditing Mailbox Use. Exchange also provides a mailbox auditing capability, offering a more granular view into a specific user’s mailbox. Using mailbox auditing in conjunction with administrator logging is typically sufficient to provide adequate audit coverage, as Exchange gives administrators the option to set audit bypass on particular mailboxes, which may allow certain admin actions to go unnoticed for extended periods of time. Mailbox auditing serves as a primary mechanism to identify mailbox abuse perpetrated by Exchange privileged users.
eDiscovery and Data Holds. Exchange allows administrators to place litigation holds on data contained in its repository to prevent deletion and to perform item-specific searches across multiple mailboxes. Monitoring when these features are enabled or disabled may allow organizations to identify when users with privileged access are attempting to electronically dumpster dive, perform recon by recovering deleted emails, or cover up unsanctioned actions by disabling data or litigation holds placed on corporate data. Controlling access to and monitoring eDiscovery should be a key control consideration.
I’m here to let you know about a new Perspective that I’ve created for the ISACA audience.
The Perspective article is titled Reasonable Software Security Engineering, and there are two key messages. The first is that software is eating the world. This isn’t my message; it’s that of venture capitalist Marc Andreessen, who uses the phrase to emphasize just how much software is being created and how critical it is to every business. The second is that products are less relevant to defense than how you create that software. As the software that runs your business is now custom, your defenses need to be built in.
What this means is transformational for businesses. It’s transformational for security professionals. They need new skills, new tools and new processes. It’s transformational for audit professionals because the sorts of documents they’ll review and the assurances they’ll need to check are going to change.
The Perspective article is designed to provide a high-level overview and actionable steps you can take to start adjusting to this new world.
Author’s note: To view Shostack’s insights on threat modeling, visit https://adam.shostack.org/blog/category/threat-modeling/.
Artificial intelligence and machine learning are growing at a very fast rate, arguably exceeding the growth of any other technology. The vast benefits, along with the potential for catastrophic perils, created by the impending advancement of AI require careful deliberation by security professionals like us.
To propel that thought process, a great report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation was written by a group of distinguished authors from prestigious institutions such as the Future of Humanity Institute, the University of Oxford and the University of Cambridge, to name a few. The contents could not be more relevant to the future of cybersecurity.
The report wonderfully articulates and explains how AI will impact the existing landscape of threats, including the following possible consequences:
- It will expand the scalability of attacks while lowering their cost;
- New attack forms may arise that are otherwise impractical for humans to perform;
- More effective attacks will develop that are finely targeted and that potentially exploit vulnerabilities in AI systems.
The report has considered three important security domains and illustrates possible changes to threat scenarios within these domains:
Digital security. The use of AI will enable labor-intensive attacks to be accomplished much more easily, exploit human vulnerabilities through the use of speech synthesis for impersonation, and exploit vulnerabilities in AI systems themselves.
Physical security. Increased attacks through drones and other autonomous weapon systems, plus attacks that subvert physical systems, such as causing autonomous vehicles to crash.
Political security. AI will enable quick analysis of mass-collected data, creating targeted propaganda and manipulated videos that will expand threats associated with privacy invasion and social media manipulation. The ability to confidently draw conclusions about human behavior, moods and beliefs from available data will undermine the ability of democracies to sustain truthful public debate.
The report makes well-researched recommendations to prevent and mitigate the risks arising from malicious use of AI, including:
- Policy-makers should work jointly with technical researchers to investigate, prevent and mitigate potential malicious uses of AI. Policy interventions should address privacy protection, coordinated use of AI for public-good security, and monitoring of AI systems and resources.
- Researchers and engineers should consider both the use and abuse cases of their work and proactively reach out to the relevant actors when harmful applications are foreseeable. Organizations and researchers carry a huge responsibility to provide proper education and to follow ethical processes, standards and norms.
- Best practices should be drawn from research areas with more mature methods for addressing use and abuse concerns, such as computer security. There should be risk assessment in technical areas of special concern, openness in research and promotion of safety and security.
- Involve more stakeholders and domain experts in discussion of these challenges. There should be red-teaming, formal verification, responsible disclosure of AI vulnerabilities and the use of related security tools and secure hardware.
Additionally, the report offers a rich bibliography and materials for future research. It is a must-read for every cybersecurity professional.
What could ISACA’s role be in navigating this road map of rapid growth and evolution of AI, and in advocating measures to maximize the benefits to society?
ISACA’s core strength is its expert body of knowledge in the governance of enterprise IT (GEIT), with implications in audit, risk and compliance. Therefore, ISACA should actively participate in the setting of frameworks and best practices for contending with AI, assisting the industry in arriving at standards for safe and ethical practices and determining methods of detections and remediation of vulnerabilities in AI systems.
ISACA can play a very important role in connecting industry, academia and regulatory bodies. These efforts will help these communities not only to maximize the benefits of AI, but also to prevent and mitigate the malicious uses and risks arising from this important and imminent technological advancement.
Author’s note: The views expressed in this post are the author’s views and do not represent any of the professional bodies with which he is associated.
Anyone who has a swimming pool – or a neighbor with a pool – is probably familiar with the term “attractive nuisance” under US tort law. In layman’s terms, an attractive nuisance is something that may attract children but could potentially harm them. If a child is harmed, the owner of the attractive nuisance may be held liable.
I do not think that employees are children or childlike – but I wonder if email is the corporate equivalent of an attractive nuisance. When employees click on links in emails from unknown parties, even when security awareness training advises otherwise, is it due to the same curiosity that drives a child to sneak onto a construction site or climb a neighbor’s fence to gain access to a pool? Whether click-happy email behavior stems from curiosity or inattentiveness, the prevalence of phishing or social engineering attacks on email tips the scales away from “attractive” and more toward “potential” nuisance.
The use of email as a favored vector for disseminating malware puts a spotlight on the ubiquitous platform email runs on, Exchange Server. Server security and availability are primary considerations. In its Microsoft Exchange Server 2016 Audit/Assurance Program, ISACA addresses these areas by providing tests of configuration and deployment, role-based access control, performance, logging, and backup and recovery. The purpose of the audit program is to assist IT auditors in their assessments of Microsoft Exchange Server 2016 deployments.
Email exploitation is generally addressed when organizations expend time and resources on creating a culture of security. These efforts frequently start with an information security training program. While this is a great start, there appears to be a disconnect between information security training and user behavior. Therefore, creating a culture of security should include an assessment of training effectiveness, such as the use of phishing simulations. Additionally, security can be supported by frequent reinforcement of best practices in “tapas fashion” rather than “firehose fashion.” That is, present smaller periodic training segments rather than a longer annual session that may offer too much information at one time for some users.
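Measuring training effectiveness with phishing simulations can be as simple as comparing click rates across campaigns. The sketch below is illustrative only: the campaign sizes and click counts are assumptions invented for the example, and real simulation platforms report these figures for you.

```python
# A minimal sketch of assessing training effectiveness via phishing
# simulations: compare the click rate before and after periodic
# "tapas-style" training. All figures are illustrative assumptions.

def click_rate(clicked, delivered):
    """Fraction of simulated phishing emails that were clicked."""
    return clicked / delivered if delivered else 0.0

before = click_rate(clicked=42, delivered=300)   # pre-training campaign
after = click_rate(clicked=18, delivered=300)    # post-training campaign

improvement = (before - after) / before
print(f"Click rate before: {before:.1%}, after: {after:.1%}")
print(f"Relative improvement: {improvement:.1%}")
```

Tracking this trend per department or per training cohort, rather than as one aggregate number, tends to show where the training message is and is not landing.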
Reliance on email is firmly institutionalized. It’s convenient. By extension, reliance on Microsoft Servers to support email and other Outlook functions, such as meeting scheduling, creation of task lists and contact records, continues to require significant efforts to ensure availability and security of sensitive information communicated through and stored by email. In this environment, coupling Exchange Server security and a culture of security go a long way toward ensuring email does not become a security nuisance.
Since I first began building internet firewalls in the late 1980s, I have periodically encountered claims that “the perimeter is dead” or “firewalls don’t work.” These claims are rather obviously wrong: your firewall or perimeter are simply a way of separating things so you can organize them better. An internet firewall is an organizing principle between “stuff that’s not your problem” (the internet) and “stuff that’s your problem” (your network).
At a finer level of detail, you might apply other organizing principles such as “my data center” and “the unmanaged cloud of desktops” or “our PCI cloud.” If you think of firewalls or perimeters as a way of organizing the various entities you deal with, you’ll be able to better understand your strategic objectives for where data moves, how it moves and where it sits. Without that type of organization, the idea of a network that is “yours” is purely imaginary.
If you think about firewalls and perimeters as an organizing principle, you’ll be able to see how single servers can be a “cloud of one” whether they’re on premises or off, and you can think about the trust relationships between remote servers and internal services. It’s a valuable mental tool, in other words.
We (or rather management) can also make mistakes by forgetting there is a persistent management cost for design. Organizing your computers and thinking about where data moves and how it is stored is expensive. It takes understanding and thought to design this stuff, and if it’s not done right, you wind up with a mess. A typical mess might be: “everything can talk to everything,” which is certainly easy to set up, requires no ongoing management, and is – for all intents and purposes – impossible to secure. It seems to me that a lot of executives expect tremendous cost-savings from moving to the cloud, but they don’t realize that you still need good systems people (to manage the cloud systems using the cloud providers’ interfaces) and governance/analysis (to think about where your data is moving and why). In other words, the thinking is the hard part.
Beyond security, it’s important to think about performance and reliability. If you figure out where your most important servers and data are, you can optimize your network architecture to guarantee best performance where it needs to be. Otherwise, in an “everything can talk to everything” network, your only option for performance tuning is to make everything faster. That’s an important distinction to keep in mind as we collectively move to software-defined networks. The organizing principle that leads to securing your data is also the organizing principle that allows you to optimize your data paths.
A senior IT person at a large enterprise told me, “We have web services all over the place. We use a vulnerability scanner to identify systems that are offering up data on port 80, then we track them down and analyze them.” Think about that for a second! If the organization has a purely reactive governance model like this, how will that enterprise move to a high-performance software-defined network? To map out your performance requirements, you need to know where the data is going to flow. You cannot do that if you’re permanently reverse-engineering your design using what I call “forensic network architecture.”
When we talk about disaster recovery or data backups, the same reasoning applies: you can’t back up your data if you don’t know where it is (organizing principle: data perimeter), and you can’t identify which systems need to be recoverable/reliable if you don’t know which they are (organizing principle: data center perimeter). None of this is a new problem, but, unfortunately, a lot of organizations are going to keep kicking the can down the road, so they can preserve their hard-won ignorance about what’s going on inside their perimeter.
Editor’s note: For more of Marcus Ranum’s insights on this topic, download The Vaguely Defined Perimeter.
IT auditors can act as strategic but independent partners to businesses currently working toward compliance with the European Union General Data Protection Regulation (GDPR), scheduled to come into enforcement on 25 May 2018.
Executive management increasingly expects the audit function to add more value to the business as a subject matter expert in all areas of risk management, as well as by supporting key business objectives and strategic initiatives. GDPR compliance is fundamentally a risk management exercise, which the audit function is well equipped to support.
Technology breaks down organizational silos
GDPR compliance requires attention and remediation expertise from various functions within the business, including human resources, legal, compliance, marketing, communications and IT. For compliance efforts to succeed, the unintentional walls that often exist between these functions need to be broken down.
While GDPR compliance is not solely a technology issue, technology acts as a common denominator across business processes and plays a significant role in the collection, processing, storage and transfer of personal data. This is why IT auditors in particular can use their overarching view of technology across the organization to highlight interdependencies and gaps in GDPR compliance efforts.
In addition to supporting a robust control environment, IT auditors can act as risk consultants while maintaining their auditor independence.
During remediation activity made necessary by GDPR compliance, IT auditors should establish strategic partnerships within the business through:
- Leveraging their understanding of the technology landscape to provide a big picture view of data risk beyond individual remediation workstreams;
- Highlighting control interdependencies and escalating potential control design gaps through early identification;
- Advocating for data privacy risk to be considered and prioritized within IT transformation activities.
Below are five examples of GDPR compliance workstreams and technology domains where IT audit can add value by providing an independent view.
1. Data Protection Impact Assessments (DPIA)
IT auditors acting as subject matter experts can help facilitate discussions so that the risks and impact of processing personal data are considered as early as possible when initiating new IT projects or vendor relationships.
The early identification of data protection risks through DPIA exercises is a significant step for successful implementation of privacy-by-design within:
- The existing data processing estate;
- In-flight IT projects (development and acquisition); and
- Future technologies and longer-term IT changes.
Beyond merely satisfying compliance requirements, IT auditors should help the business take a longer-term view by institutionalizing data protection impact assessments (Article 35) and fostering new ways of thinking about the impact of privacy on data processing activities.
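As a concrete illustration, the screening step that typically precedes a full DPIA can be expressed as a simple checklist. The sketch below is a minimal Python example; the question wording and field names are illustrative assumptions drawn loosely from the criteria in Article 35(3), not a mandated methodology.

```python
# Illustrative DPIA screening criteria (loosely based on Article 35(3));
# a real trigger assessment would follow the organization's own methodology.
SCREENING_QUESTIONS = {
    "systematic_profiling": "Systematic, extensive profiling with legal or similarly significant effects?",
    "large_scale_special_categories": "Large-scale processing of special categories of data?",
    "public_monitoring": "Large-scale systematic monitoring of a publicly accessible area?",
}

def dpia_required(answers: dict) -> bool:
    """A DPIA is indicated if any screening criterion applies to the project."""
    return any(answers.get(question, False) for question in SCREENING_QUESTIONS)

# Hypothetical project intake answers captured at project initiation
project = {
    "systematic_profiling": False,
    "large_scale_special_categories": True,
    "public_monitoring": False,
}
print(dpia_required(project))  # → True
```

Embedding a check like this in project intake is one way to make privacy-by-design a default step rather than an afterthought.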
2. Data Governance and Data Flows
Organizations (data controllers and data processors) must demonstrate their compliance with GDPR by maintaining records of processing activities under their responsibility (Article 30) and implementing appropriate technical and organizational measures (Article 32).
This requirement aligns perfectly with the main objective of data governance – to ensure the management of data as a strategic business asset in order to derive maximum value.
Effective data governance involves understanding data flows within business processes and ensuring the stewardship of data through activities such as developing data architectures, implementing quality management, data integration and meta-data management.
As organizations develop and maintain records of their personal data processing, IT auditors can provide a view on data flow mapping activities. Key questions to ask business representatives include:
- What personal data items are being collected and in what formats?
- At what point in the data flow is lawful processing of personal data determined?
- Can storage locations and formats easily facilitate the enforcement of data subject rights, including subject access requests, right-to-erasure, rectification and portability?
IT auditors can help facilitate evaluations of the completeness of data flows by sharing good practices from their experience in mapping business processes during scoping activity.
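To make the record-keeping requirement concrete, a record of processing activities can be captured as structured data that lends itself to exactly these questions. The Python sketch below is illustrative only; the field names and sample activities are assumptions, not a mandated Article 30 schema.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """Illustrative record of a processing activity (cf. GDPR Article 30)."""
    activity: str               # e.g., "payroll"
    purpose: str                # why the personal data is processed
    lawful_basis: str           # e.g., "contract", "consent"; empty if undetermined
    data_categories: list       # personal data items collected
    storage_location: str       # where the data rests
    retention_days: int         # how long it is kept
    recipients: list = field(default_factory=list)  # who receives the data

def missing_lawful_basis(records):
    """Flag activities whose lawful basis for processing has not been determined."""
    return [r.activity for r in records if not r.lawful_basis]

# Hypothetical inventory entries
records = [
    ProcessingRecord("payroll", "salary payment", "contract",
                     ["name", "bank account"], "EU data centre", 2555),
    ProcessingRecord("marketing", "newsletter", "",
                     ["email"], "CRM (SaaS)", 365),
]
print(missing_lawful_basis(records))  # → ['marketing']
```

A structured inventory like this also makes the data subject rights question answerable: storage locations and formats are recorded per activity, so erasure or access requests can be traced to concrete systems.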
3. Risk-Based Data Protection Controls
While it may be tempting to rush toward implementing encryption and pseudonymization as solutions to data protection, it is important to question whether these controls are necessary in the first place (see GDPR Recital 28). Other protection strategies might be more appropriate, depending on the risk.
Where a risk assessment determines that pseudonymization is required as a method of data protection, IT auditors can help the business consider whether:
- System design permits the attribution of pseudonymized data to natural persons (data subjects) through inadvertent data enrichment;
- Domain segregation is applied to separate attribution data from pseudonymized data; and
- Access to meta-data is appropriately restricted.
By challenging the business to consider the real risks to data, it is possible to arrive at pragmatic solutions for data protection, which may include applying controls like pseudonymization.
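As an illustration of the domain segregation point above, pseudonyms can be derived with a keyed hash so that attribution to a data subject is impossible without a key held in a separate domain. The Python sketch below is a minimal example, assuming a hypothetical key that would in practice live in a segregated store such as an HSM; it is not a complete pseudonymization design.

```python
import hashlib
import hmac

# Illustrative pseudonymization key: in practice this would be held in a
# segregated domain (e.g., an HSM), away from the pseudonymized data set.
PSEUDONYM_KEY = b"example-secret-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, deterministic pseudonym.

    Without the key, the pseudonym cannot be attributed back to the data
    subject; with it, the controller can still link records consistently.
    """
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"subject_id": "alice@example.com", "purchase": "laptop"}
pseudonymized = {**record, "subject_id": pseudonymize(record["subject_id"])}
```

Keeping `PSEUDONYM_KEY` and any attribution tables outside the analytics domain is what makes the segregation meaningful; if the key travels with the data, the control is cosmetic.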
4. Big Data and Machine Learning
According to the EU Agency for Network and Information Security (ENISA), “The extensive collection and further processing of personal information in the context of big data analytics has given rise to serious privacy concerns, especially relating to wide scale electronic surveillance, profiling, and disclosure of private data.”
While unlocking the business value of data is a critical part of any digital agenda, businesses must thoroughly consider the potential impact on data subjects of unfair or biased data models, inaccurate analysis, prediction of future events (e.g., through machine learning) and profiling (Article 22).
IT auditors can challenge data scientists within their organizations to consider questions such as:
- Fairness: How do you ensure that big data algorithms are not repurposed in unexpected ways to draw unwarranted conclusions about data subjects?
- Data minimization: How do you avoid excessive data collection, manage data retention (including secondary uses of data) and guarantee data subject rights?
- Data protection: How do you ensure privacy enhancing technologies (PETs) are designed by default into big data solutions?
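The data minimization question above can be made tangible with a simple filter that strips direct identifiers before records reach an analytics pipeline. The Python sketch below is illustrative; the identifier list and field names are assumptions, not a standard taxonomy.

```python
# Direct identifiers that an illustrative minimization policy removes
# before records are passed to the analytics or ML pipeline.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def minimize(record: dict) -> dict:
    """Drop fields not needed for the analytical purpose (data minimization)."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

# Hypothetical raw customer record
raw = {"name": "Alice", "email": "a@example.com", "age_band": "30-39", "spend": 120}
print(minimize(raw))  # → {'age_band': '30-39', 'spend': 120}
```

Applying the filter at the ingestion boundary, rather than inside individual models, is one way to make minimization a default property of the pipeline instead of a per-project courtesy.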
5. Data Processing in the Cloud
While IT auditors’ focus on cloud computing is not new, GDPR compliance requires renewed attention on data processing performed by third parties, including cloud service providers (CSPs).
Data privacy/protection-related control considerations for cloud-based data processing include:
- Maintaining accurate records of cloud-based processing;
- Establishing data processing location controls within cloud architectures;
- Ownership of master keys for encrypting data-at-rest and data-in-transit;
- Contractual definitions of controller, processor and sub-processor responsibilities; and
- CSP support for the enforcement of data subject rights (e.g., right-to-erasure).
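The processing-location consideration above can be illustrated with an allow-list check over a cloud storage inventory. The sketch below is a minimal Python example; the bucket names and region labels are assumptions in a generic CSP naming style, not tied to any particular provider's API.

```python
# Illustrative allow-list of regions where personal data may be processed;
# region names follow a generic CSP naming style and are assumptions.
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def non_compliant_buckets(buckets: dict) -> list:
    """Return storage locations configured outside the allowed regions."""
    return sorted(name for name, region in buckets.items()
                  if region not in ALLOWED_REGIONS)

# Hypothetical inventory mapping bucket name to configured region
inventory = {"hr-backups": "eu-west-1", "crm-export": "us-east-1"}
print(non_compliant_buckets(inventory))  # → ['crm-export']
```

Run periodically against the live inventory, a check like this turns a contractual location clause into an auditable technical control.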
Rather than a sprint to the finish line, organizations must see GDPR compliance as a marathon toward the goal of institutionalizing data privacy and data protection in the corporate culture. IT auditors can support this cultural change by looking beyond annual IT audit calendars and one-off GDPR-related audit engagements.
Through early and consistent engagement with the business through conversations, training and workshops, the IT audit function can mature from its traditional focus as a control watchdog to become a strategic business partner supporting longer-term organizational objectives.
We live in a world full of risk, and nowhere is risk more prevalent than in technology.
The Center for Internet Security (CIS) has recommended 20 critical security controls to respond to threats and vulnerabilities associated with the internet. The premise is that proper implementation of these controls will mitigate the risks of damage, unauthorized alteration or theft of information and technology assets. However, when it comes to risk mitigation, how much is enough? How much reduction of risk is required? In other words, what is the risk appetite of the enterprise?
This varies from company to company depending on multiple factors, such as the industry in which it operates, the type of service or product provided, the current economic climate and the company’s financial position. Risk appetite also depends on the overall risk landscape. As evidenced by a continual wave of news reports, the cyber arena is full of threats designed to steal, destroy, alter or simply gain unauthorized access to information assets.
In this digital world, it stands to reason that management is increasingly cognizant of the cyber threats that endanger company assets. Managing these risks could benefit immensely from a cybersecurity audit. While the CIS Controls Audit/Assurance Program is not designed to provide assurance beyond the security program of an enterprise, the controls are presented in a prioritized fashion to assist the enterprise in leveraging its potentially limited resources to protect key assets and realize the most benefit.
The purpose of an audit is to assess the efficiency and effectiveness of current controls and provide a level of assurance that assets are adequately protected and accessible to authorized users when needed.
To ensure proper safeguards are in place, management should not rely solely on the CIS Controls IS Audit/Assurance Program. Audits of other pertinent operational processes should take place. A holistic approach is necessary and requires a strategic partnership between the board of directors, senior management, IT and functional business units, and audit. While the board of directors provides guidance and direction, management is responsible for executing based on those directives. This holistic approach can result in the creation and implementation of policies and processes that are designed for business value, as well as the security of all company assets.