Controls in the Cloud – Moving Over Isn't As Easy As Flipping a Switch

Shane O'Donnell

Lift and shift.

While this phrase is not new, it’s now said with regularity in relation to moving infrastructure to the cloud. Providers promise seamless transitions as if you were moving a server from one rack to another right next door. While moving to the cloud can put companies in a more secure position, proper care needs to be taken. Assuming everything is the same can be a fatal mistake, one that is happening on a regular basis.

No-brainers
From a physical security perspective, moving infrastructure to the cloud will almost always be more secure. Large cloud providers place infrastructure in state-of-the-art data centers with top-of-the-line physical security measures. Organizations often do not have the budget, time, or expertise to build their own on-premises data centers to these specifications. I have seen the full spectrum of data centers over the years (umbrellas over server racks as a control to protect from a leaky roof, anyone?). Even the most advanced data centers we see on premises do not match those of the large cloud providers.

What hasn’t changed
Requirements and basic control concepts have not changed as the proliferation of cloud infrastructure unfolds. User access, change management, and firewalls are all still there. Control frameworks such as COBIT, ISO 27001, NIST CSF, and the CIS controls still apply and have great value. Sarbanes-Oxley controls are still a driver of security practices for public companies.

What has changed
How the controls of the past are performed has changed upon moving to the cloud. Here are some common examples:

Security administration is more in-depth. Administrative rights are among the highest-risk access privileges in an organization and a primary target for malicious actors. Handling admin rights in the cloud is different and requires proper due care. Knowing which roles are administrative in nature can be confusing, so it’s important to implement them correctly from the start. Separation of duties between key administration and key usage is essential. Having the proper tools to administer access can be daunting. Don’t assume your cloud provider will guide you through all these intricacies; plan ahead.
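
To make the separation of duties concrete, a key policy can grant one role the right to administer a key and a different role the right to use it, but not both. Below is a minimal sketch, assuming an AWS KMS-style key policy expressed as a Python dictionary; the account ID and role names are hypothetical placeholders.

```python
# Minimal sketch of separation of duties in a cloud key policy (AWS KMS-style).
# The account ID and role names are hypothetical placeholders.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdminRole"},
            # Administrators manage the key but cannot use it to decrypt data.
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                       "kms:Disable*", "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {
            "Sid": "AllowKeyUsage",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/AppServiceRole"},
            # Application roles use the key but cannot change its policy or delete it.
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}
```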

Perimeter security has changed. While layered security has always been important, it becomes even more important in the cloud. Several recent news stories have described breaches caused by storage containers being left exposed to the internet, with a large cloud provider’s name attached. At first blush, most people blame the cloud provider, but these breaches are most often the cloud customer’s fault. Some important items to think about are proper DMZs for critical and/or regulated data, firewall configurations, and proper restriction of admin rights to those resources.
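
As one illustration, a periodic audit can flag object storage that has been left open to public exposure. The sketch below is a minimal example assuming AWS S3 and the boto3 library; equivalent checks exist for other providers.

```python
import boto3
from botocore.exceptions import ClientError

# Minimal sketch: flag S3 buckets that have no public-access block configured.
# Assumes AWS credentials are available in the environment.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"Review {name}: public access is not fully blocked")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"Review {name}: no public access block configured")
        else:
            raise
```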

Securing connectivity becomes more important. Servers and other hardware won’t be sitting down the hall when moving infrastructure to the cloud. Access will almost always be remote, thus creating new security challenges. Understanding all ingress and egress points is essential, as is putting proper controls around them.
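
One simple way to keep sight of ingress points is to enumerate firewall rules that accept traffic from anywhere on the internet. A minimal sketch, again assuming AWS security groups and boto3:

```python
import boto3

# Minimal sketch: list security group rules that are open to the whole internet.
# Assumes AWS credentials and a default region are configured.
ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{group['GroupId']} allows {rule.get('IpProtocol')} "
                      f"port {rule.get('FromPort', 'all')} from 0.0.0.0/0")
```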

Encryption. Encrypting data will be a top concern for many organizations, as the data is now “somewhere else.” The good news is the native encryption tools of many large cloud providers are advanced, and most times data at rest can be automatically encrypted using a strong algorithm. This is a huge step up right off the bat for many companies. Because encryption is so important in the cloud, key management becomes a high-risk control. Policies, procedures, and controls around key management need to be well-thought-out.
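
For example, many providers let you enforce default encryption at rest on an object storage bucket with a customer-managed key, which is exactly where the key management controls described above become critical. A minimal sketch, assuming AWS S3 and boto3; the bucket name and key ARN are hypothetical placeholders.

```python
import boto3

# Minimal sketch: enforce encryption at rest by default on a bucket using a
# customer-managed key. The bucket name and key ARN are hypothetical placeholders.
s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
                }
            }
        ]
    },
)
```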

Fear not, it’s not all bad!
While some challenges may be present as outlined above, moving to the cloud is most often a great move for an organization. Improved security, improved performance, and cost savings are only a few benefits of a cloud migration. Multiple frameworks exist to provide a secure path to cloud adoption, so organizations are not approaching this “blind.” A cloud security framework can guide you through the process of secure adoption and also provide assurance over cloud adoptions you have already performed. We are helping clients in all industries with these cloud migrations/adoptions and have some great perspective on dos, don’ts, and best practices.

Editor’s note: For more cloud-related insights, download ISACA’s complimentary new white paper, Continuous Oversight in the Cloud.

Is Your GRC Program Ready to Thrive in the Digital Economy?

Sudhakar Sathiyamurthy

Digital technologies have profoundly changed our lives, blurring the lines between the digital and physical worlds. From its humble beginnings, the current constellation of tools and technologies that empower organizations has grown smarter. While digitalization makes businesses intelligent and offers immense value, it also opens up a diverse range of risks. Organizations often face challenges in effectively sensing and managing digital risks and in demonstrating reasonable compliance.

The impediments to effective GRC often show up as operational shortcomings: inadequate visibility into crown-jewel assets, a siloed view of risks, risk and compliance reports that do not reach the right audience, redundant approaches that prevent risks from being correlated and compound their exposure, poor user experience, and overwhelmingly complex GRC automation. With digital transformation going mainstream, organizations that fail to keep pace with relevant GRC strategies are likely to put themselves at a competitive disadvantage.

The following list summarizes the common misconceptions about the role of GRC in the digital ecosystem:

1) Traditional risk and compliance management practices organize operations into disconnected units, often seen as disparate departments merely administering their own chores to satisfy compliance requirements, with no consistency across risk frameworks, risk-scoring techniques and terminologies. This leads to misunderstandings and divergent views of GRC, and the silo model also wastes resources and creates inefficiencies through isolated approaches. Organizations should focus on bolstering the effectiveness of GRC by breaking down silos and setting common or comparable frameworks and definitions.

2) With digitalization, businesses end up processing heaps of data of all forms, ranging from users’ searches, clicks, website visits, likes, daily habits and online purchases to much more, to achieve their competitive edge. With data as the fuel of digitalization, organizations also become targets for malicious attacks and information theft. Given the fast pace of digital business and the burgeoning data underpinning its processes, GRC cannot work as a separate competence outside the digital processes; instead, GRC should be integrated into the design of digital transformation.

3) Digitalization is making inroads with novel delivery methods, and the supply chain is too big to ignore. The burgeoning growth of third-party relationships demands credible and timely insight into the risk and compliance posture of supply chain entities. Remember, your organization is only as strong as its chain of suppliers, and any weak link in the chain is an opportunity for perpetrators to intrude. GRC cannot make the cut with a checklist focus.

4) GRC should communicate in the language of its audiences to demonstrate its value. How many times have we seen a risk assessment conducted at a theoretical level, highlighting issues management is already aware of; a frontline employee questioning the context of a requirement in the controls framework and how it applies to their area of support; or a board losing attention during technically overloaded risk presentations? It all comes down to a simple yet deceptively complex expectation: “communication.” GRC should tailor its language to its audiences to improve the user experience and to demonstrate value to the business.

5) As speed and agility are the key influencers of success in the digitalization journey, administering GRC in spreadsheets and shared drives delivers clearly diminishing value for organizations. At the same time, automation is not the ultimate fix; siloed technologies without sufficient collaboration are far more damaging than manual paperwork. Remember, the goal of GRC solutions is to deliver business value by providing accurate, credible and timely intelligence on risk and compliance, rather than getting tangled in solution warfare.

Digitalization is spreading its tentacles across organizations. Though organizations are challenged to find new avenues for bulletproofing GRC, successful risk practitioners are staying ahead of the game by focusing on business value creation.

Editor’s note: Sathiyamurthy will provide more insights on this topic in his “Bulletproof your Governance, Risk and Compliance program - GRC by Design” session at ISACA’s 2019 North America CACS conference, to take place 13-15 May in Anaheim, California, USA.

Author’s note: Sudhakar Sathiyamurthy, CISA, CRISC, CGEIT, CIPP, ITIL (Expert), is an experienced executive and director with Grant Thornton’s Risk Advisory Services, with a broad range of international experience in building and transforming outcome-driven risk advisory services and solutions. His experience has been shaped by helping clients model and implement strategies to achieve a risk-intelligent posture. Sathiyamurthy has led various large-scale programs helping clients stand up and scale risk capabilities. He has led and contributed to various publications, authored editorials for leading journals and frequently speaks at international forums. He can be contacted at sudsathiyam@gmail.com.

Big Data: Too Valuable and Too Challenging to Be Overlooked

Chris K. Dimitriadis

As the new year begins and business leaders refine their 2019 plans, how to effectively deploy technology increasingly will be a focal point of conversations in the boardroom and elsewhere throughout the enterprise. While trending technologies such as artificial intelligence, blockchain and 5G wireless networks command much of the mindshare in the new year, one technology that might no longer be deemed buzzworthy should nonetheless be a major consideration in 2019 for the C-suite and security teams alike – how to derive value while mitigating risk from big data.

The term “big data” has been in circulation for many years, but big data continues to evolve in scope and capability, especially with AI, augmented analytics and other emerging technologies enabling data to be harnessed in a more sophisticated fashion. ISACA’s 2018 Digital Transformation Barometer shows that big data remains the technology most capable of delivering organizations transformative value, and it is easy to see why. The positive potential of big data is enormous, spanning virtually all industries and impacting both the public and private sectors. Of critical importance, organizations can tap into big data sets to better understand their customers and configure predictive models that allow them to be more strategic and proactive in their business planning. While the benefits for private-sector enterprises are immense, there is perhaps even more upside for society, generally. For example, big data can be used to accelerate the progress made in scientific research, improve patient outcomes in healthcare by revealing more nuanced treatment patterns and aid in the modernization of urban centers by allowing cities to more effectively govern traffic flow and the deployment of city resources. In the context of these and other high-impact innovations that are in progress, the International Data Corporation (IDC) made the whopping projection that worldwide revenues for big data and business analytics will reach $260 billion by 2022.

Despite the considerable enthusiasm for big data-driven projects and use cases, big data also presents a range of evolving challenges from a security and privacy standpoint. All emerging technologies introduce new threats, and the same holds true for big data. While many of the fundamentals of network security apply to big data, there are some distinct considerations when it comes to securing it. To store big data, enterprises often turn to NoSQL databases, which scale better than conventional relational databases but introduce new cost and security challenges. Additionally, traditional controls such as encryption may introduce bottlenecks due to the size of the data, meaning practitioners need to become more creative in protecting big data. Data anonymization, which allows organizations to protect the privacy of individuals within a data set, is typically an effective approach, and can be especially useful when enterprises are working with third-party vendors. Further, security frameworks, particularly those that align with pertinent standards and regulations, can be utilized during big data implementation projects in order to incorporate all appropriate controls by design. These frameworks also help organizations avoid taking shortcuts in their data governance that could open the door to a large-scale breach or, on a less dramatic but still significant note, identify inefficient practices that do little to help organizations extract value from their data.
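
As a simple illustration of the anonymization point, direct identifiers can be replaced with keyed, non-reversible tokens and quasi-identifiers can be coarsened before a data set is shared with a third party. The sketch below is a minimal Python example; the secret key and record fields are hypothetical.

```python
import hashlib
import hmac

# Minimal sketch of pseudonymization before sharing records with a third party:
# direct identifiers become keyed hashes, quasi-identifiers are coarsened.
# The secret key is a hypothetical value that must be managed separately from the data.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age": 37, "city": "Athens", "purchases": 14}

shared_record = {
    "customer_token": pseudonymize(record["email"]),  # stable join key, no raw identity
    "age_band": f"{(record['age'] // 10) * 10}s",      # 37 -> "30s"
    "city": record["city"],
    "purchases": record["purchases"],
}
print(shared_record)
```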

Whatever approaches are taken, enterprise leaders need to be every bit as committed to safeguarding big data as they are to the data’s collection and utilization. Without a doubt, big data presents an attractive target to attackers since big data is highly valued – after all, the bigger the data, the bigger the breach. Several attack types exist, potentially impacting both the confidentiality and the integrity of the data, meaning security practitioners must possess an overarching understanding of the threats that could impact the data. This challenge becomes all the more difficult considering the wide variety of sources and data types that make up big data.

Although the security risks that accompany big data can be daunting, addressing these concerns head-on is the only viable option, as big data becomes an increasingly valuable asset for enterprises to harness. Not only does the proliferation of data in the digital transformation era create new security risks, but the complexity of storing and managing the data can contribute to lower employee morale and higher turnover among IT professionals, according to a Vanson Bourne study. All of these factors call upon organizations to develop a cohesive, holistic strategy for big data, with extensive collaboration between the C-suite and enterprise security leaders. We have seen the hype around many emerging technologies ebb and flow in recent years, but the need to effectively handle big data has become a fixture on the enterprise landscape that will require ongoing attention and investment in the new year, and beyond.

Editor’s note: This post originally published in CSO.

The Business Risks Behind Slow-Running Tech

Anna Johannson

Entrepreneurs and IT leaders frequently underestimate the true power that slow technology has to negatively impact a business. It’s tempting to wait as long as possible to upgrade or replace your team’s devices; after all, every additional month you get out of a device results in measurable cost savings for the business. But all those slow, aging devices are probably interfering with your business more than you realize.

The roots of slow technology
Slow technology comes in many forms, but the symptoms are always the same: processing becomes slower, making it harder for employees to complete their tasks in a timely manner, and occasionally stalling productivity altogether (like when those devices crash).

Generally speaking, there are three main influencing factors that can negatively impact a device’s speed:

  • Age. First and most notably, devices tend to slow down as they get older. Their processors don’t work as efficiently, and disk fragmentation can interfere with how the device functions. On top of that, new programs tend to be designed for faster, more up-to-date machines, which means older computers can’t run them as intended—resulting in a kind of illusory slowdown.
  • Malware. A sudden or inexplicable slowdown may be the result of malware infecting the device. In some cases, this is an easy problem to fix; a quick cleanup can instantly restore the device to full working order. In other cases, more intensive troubleshooting may be required, or the device might need to be wiped clean.
  • Improper use. Machines can also suffer tremendous slowdown if they aren’t being used responsibly. For example, if an employee spends lots of time downloading files, but never deletes those files, or if they have tons of installed programs that they never use, the computer won’t work as efficiently as it could. Employees may also misreport slow devices; if they have 39 tabs open in a web browser and one of them won’t load as quickly as they would like, the problem probably isn’t with the device itself.

The effects of slow tech
As for how that speed affects productivity, there are several areas of impact to consider:

  • Actions and tasks per day (or per hour). This is the most impactful effect, and the most obvious one. If employees face even a slight delay when attempting to interact with in-app elements, or when performing their most basic tasks, those small pieces of interference can quickly add up to compromise many hours of productivity. Depending on the severity of the problem, a slow device can cost you upwards of an hour per day, per employee (see the rough cost model after this list).
  • Availability of new programs. Dealing with a slow device can also affect which types of programs an employee is able to run. If they feel their device is old, they may be less willing to update their existing programs (which ultimately yields a security risk). They may also intentionally avoid downloading and using new programs that would otherwise facilitate greater productivity, or new responsibilities.
  • Employee morale. Of course, being forced to tolerate a slow device can also result in decreased employee morale. Over time, your employees will grow more frustrated, aware that they aren’t working to their full potential, and that frustration will result in many hours of lost work (not to mention higher absenteeism).
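
The rough cost model below puts numbers to that claim; the head count, minutes lost, working days and hourly cost are hypothetical inputs to replace with your own figures.

```python
# Rough cost model for device slowdowns; all inputs are hypothetical examples.
employees = 50
minutes_lost_per_day = 30        # time lost to slow or crashing devices
working_days_per_year = 230
hourly_cost = 40.0               # fully loaded cost per employee hour (USD)

hours_lost_per_year = employees * (minutes_lost_per_day / 60) * working_days_per_year
annual_cost = hours_lost_per_year * hourly_cost

print(f"{hours_lost_per_year:,.0f} hours lost, roughly ${annual_cost:,.0f} per year")
# With these inputs: 5,750 hours and about $230,000 per year.
```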

Fixing the problem
So, what can you do to fix the problem?

  • Clean up any malware. First, investigate any slow devices to see what the real root of the problem is. If there are any instances of malware, make sure to remove them, and test the device again. While you’re at it, make sure your proactive defenses (such as firewalls and antivirus software) are working effectively.
  • Instruct employees on proper use. Host a seminar or send out a memo that instructs employees how to properly care for their devices, especially if they’re allowed to take those devices home as if they were personal belongings. Give them tips for how to keep their devices functioning optimally, and how to temporarily boost speed for intensive applications.
  • Invest in new upgrades. If you’re still dealing with old tech, make an effort to upgrade it. Sometimes, you can get by with a RAM upgrade. Other times, you may need to replace the device entirely. But remember—this is a long-term investment in your team’s productivity.

Correcting, upgrading, or replacing your slow technology can be both costly and time-consuming, but it’s almost always worth the effort. Not only will your team be able to utilize more resources and work faster, they’ll be happier—and that morale will almost certainly have a positive impact on your business’s profitability. Stay proactive, and take action on slow devices before they have a chance to interfere with your work.

All Talk, Little Action: AI and Digital Ethics in People Technology

Bhumika Zhaveri

As we continue the end-of-the-year review on all things tech, digital ethics and the progress of artificial intelligence (AI) in people-related technologies spring to mind. People tech affects HR, recruitment and other areas that enable businesses to hire, manage and plan their key asset – people. With new suppliers coming out consistently, it is very difficult for businesses to understand which technology is ethical with regard to data, code and algorithms, versus technology that is not.

The first thing to highlight is that AI is a huge buzzword in people tech these days. However, it is abused more often than it should be, resulting in confusion for businesses that simply may not have the time to keep on top of tech or research it before buying, which typically costs them significant resources. To clarify, AI has several strands, two of which are machine learning and automation. These two currently see the heaviest use in people tech, whereas other forms of AI are more relevant in other sectors; autonomous cars, for example, use robotics and other strands of AI.

Now, regardless of the strand of AI in use, and especially during the algorithm-building stages, it is extremely important for every developer and tech business not only to think about “ethics” and “biases,” but to actually implement practices that help them tackle their own challenges with ethics and bias, as well as those of their employees and users. This truly allows them to build and code purpose-driven, value-adding commercial products. Experts, individuals and organizations are increasingly talking about this issue, from the TechUK committees I participate in to the IEEE guidelines I contribute to globally.

However, very little has been seen in terms of action, so, for my part, I am “practicing what I preach.” While we are a startup, and it adds a couple of hours to my time reviewing the code for new features, it is very satisfying to know that this work comes from a place of supporting users. In addition, we prioritize careful data use and management; we will strictly use only the data that helps our users with analytics (based on what our platform offers) and provides a better experience.

How can larger tech companies and software houses implement this? I believe that the larger the business, the easier it should be to have processes and resources that effectively address the desired outputs of the business vision and support customers, while also serving as an in-house ethics and bias review function. This gives businesses a lot of power internally to follow guidelines drawn up by governments and other organizations working actively to support this framework-building.

There is no doubt that 2019 will be a key year for growth in digitization, automation, augmented analytics and blockchain. So, I really hope that businesses stop talking about the fundamental challenges of digital and AI ethics, and start building tools and frameworks to monitor them.

About the author: Bhumika Zhaveri is a non-conventional and solutions-driven technology entrepreneur and businesswoman. As an experienced HR technologist, she has expertise in HR and recruitment: technology and programme management for change and transformation. Privileged to look at challenges differently than most due to versatile life, personal and professional experiences, she is actively involved with TechUK, IEEE committees on data ethics, AI and digital topics, and the TechSheCan charter with PWC, Girls Who Code and similar organizations supporting women in STEM. Currently, she is also the tech advisor for Resume Foundation and Bridge of Hope, while also being a founding member of Digital Anthropology.

What is Driving Growth for AR/VR?

Kris Kolo

Gartner’s recent list of top tech trends for 2019 included immersive experiences, which it described as follows:

“Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience.”

Below, I explore some of the anticipated themes related to VR/AR that will play a role in the coming year and beyond:

• Global AR & VR product revenues are expected to grow from US $3.8 billion in 2017 to US $56.4 billion in 2022, a 71 percent compound annual growth rate (see the worked calculation after this list). This includes enterprise and consumer segments (ARtillry Intelligence).

  • In VR, consumer revenue will eclipse enterprise revenue by a 3:1 ratio in 2022. Standalone VR like Oculus Go will accelerate consumer adoption.
  • Head-worn AR will find a home with consumers. However, its specs and stylistic realities inhibit several consumer use cases in the near term. Apple’s potential 2021-2022 introduction of smart glasses will shift AR’s momentum and revenue share toward consumer spending.
  • By 2022, enterprise AR’s revenue dominance over consumer AR will decelerate as smart glasses begin to penetrate consumer markets. Until then, mobile will dominate consumer AR, with most revenue derived from software as opposed to hardware (smartphone sales aren't counted).

• The patterns of investment and development in the different sectors in which VR/AR are applicable – or potentially applicable – show the increasing applicability of this technology beyond the games and entertainment fields that saw its birth in the 1990s; 38 percent of respondents, for example, believe VR growth in the enterprise sector has been “strong” or “very strong,” with an equivalent figure of 43 percent for AR (The XR Industry Survey 2018).

  • Education is the enterprise sector that has been prioritizing VR/AR the most, and is the most competitive, despite the fact that it traditionally has had much less spending power than industry. Of respondents who reported that they are already using XR technologies, 23 percent were in the education sector.
  • Architecture/engineering/construction was a close second at 18 percent. Healthcare is quite low on the list despite the obvious VR/AR potential in diagnosis and therapy, with just 7 percent of those using this technology coming from the healthcare sector.
  • Industry expectations are that AR will blossom in the mainstream before VR does, in part because of the availability of open content development platforms like ARCore and ARKit, which have no VR counterparts.
  • Many industries see benefits in the long term from combining VR and AR. VR’s superior ability to create a fully immersive environment currently gives it the edge in training and educational applications.
  • Sixty-two percent of service organizations say that AR is providing measurable value for service in the following ways: better knowledge transfer among employees, increased employee efficiency onsite, improved first-time fix rates, and fewer truck rolls (IDC / PTC).
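
For reference, the compound annual growth rate cited in the first bullet above can be checked directly from the endpoint figures:

```python
# Worked check of the cited compound annual growth rate (CAGR), 2017 -> 2022.
start, end, years = 3.8, 56.4, 5  # US $ billions, per ARtillry Intelligence
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR = {cagr:.1%}")  # roughly 71.5%, consistent with the ~71 percent cited
```
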
Building Cyber Resilience Through a Risk-Based Approach

E. Doug Grindstaff

For an organization’s cyber culture to be effective, it must also be mature. A recent cybersecurity culture study conducted by ISACA and CMMI Institute found that only 5 percent of organizations believe no gap exists between their current and desired cybersecurity culture. A full third see a significant gap. That’s why I found it so valuable to sit down with cybersecurity leaders across the public, private and non-profit sectors in the UK last week to discuss cyber maturity, what it means to people and how we can help organizations value being more prepared.

The general consensus at our session, “The Future of Cyber Maturity and Benchmarking,” was that our work must start at the top, with the board. We must speak in terms boards will understand and get boards to value cybersecurity as an enterprise business risk that must be managed as such. This hasn’t happened yet to the degree it needs to. The cybersecurity culture study confirms this: 58 percent of respondents cited a corresponding lack of a clear management plan or KPIs.

Another key word involved in maturity is resilience. No organization is ever completely bulletproof against attack. The idea is to train and plan thoroughly, ensure that the organization as a whole is as prepared as possible, and, if/when an attack happens, be in a position to respond efficiently and effectively. That’s a resilient organization, and the best we can ask for when it comes to cybercrime.

As organizations become more resilient, they must honor the need to effectively manage risk. The risk equation includes workforce readiness, security operations and capability maturity. Your workforce must be thoroughly trained to understand the risk at all levels.

The group was heavily focused on moving away from the old way of managing risk. Managing risk is not about compliance or completing a checklist. It is truly about building resilience through a risk-based approach.

A quality maturity model looks at people, processes and technology, and takes all these elements into consideration. However, the discussion was largely around the workforce readiness and how to motivate people to do what needs to be done. Asking the right questions as technology leaders is a start. Are we doing the right things? Are we doing them well? How can we ensure the board is informed and engaged, and that we are focused on areas of greatest risk?

As technology leaders and assurance professionals, we discussed the need to be ahead of the curve, implementing cybersecurity as a business imperative, rather than waiting for an accident and reacting at that time. An organization must know its risk appetite and its risk posture.

All of this counsel goes for organizations of any size and at all places within the organization. We discussed how supply chains, micro businesses, and small and medium enterprises (SMEs) have special considerations as they build capabilities. SMEs often have a much smaller staff to work with, but the responsibility to manage the risk remains the same, thus making a focused and strategic approach all the more important.

A mature organization is one that has truly examined its risk and understands it from the top down, with buy-in to protect the organization from each and every employee. I look forward to continuing this important discussion.

Data Security and Access to Voters’ Personal Data by Political Parties: An EU Case Study

Laszlo Dellei

Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective and UK perspective on the topic. Today, we publish a post from Laszlo Dellei, providing an EU perspective.

Brexit and the 2016 US presidential election showed that microtargeting voters to deliver them certain political messages may gradually alter voters’ decisions. While less publicized, concerns related to election data integrity also exist throughout the EU. The European Parliament has conducted several public hearings on this topic and the Commission is supporting Member States to secure their local and national elections, as well as their citizens’ participation in EU elections.

The Commission recently published a communication on free and fair European elections, which outlines all the efforts made by the institutions to make sure that the upcoming EU elections in 2019 will be held democratically. The EU’s strategy is to combine data protection, cybersecurity, cooperation, transparency, and appropriate sanctions.

For instance, the Commission proposes introducing financial penalties of 5 percent of the annual budget of the European party or political foundation concerned if they infringe the data protection rules in an attempt to influence the outcome of elections to the European Parliament.

Another key aspect of this strategy is the implementation of the General Data Protection Regulation (GDPR), which is equipped to help prevent and address the unlawful use of personal data. Therefore, the Commission prepared specific guidance to highlight the data protection obligations of relevance in the electoral context.

In parallel, the Commission published recommendations to enhance the efficient conduct of the 2019 EU elections. Key points are as follows:

  • The EU encourages Member States to establish and support a national elections network to ensure cooperation among the authorities in connected fields (such as data protection authorities, media regulators, cybersecurity authorities and law enforcement).
  • It is also recommended to encourage and facilitate the transparency of paid online political advertisements and communications.
  • Member States should also take appropriate and proportionate technical and organizational measures to manage the risks posed to the security of network and information systems used for the organization of elections.
  • Member States are encouraged to set up awareness-raising activities aimed at increasing the transparency of elections and building trust in the electoral processes.

Sources of voter data in Hungary
In my country, Hungary, the relevant regulations and practices may reveal certain risks and problems in this respect. Current rules providing protection of voters’ personal data, especially provisions governing integrity and security of such information, will be revised.

During microtargeting, information may be used to deliver political messages to the recipients. In addition to the name and political preferences of the data subject, the processing of physical or email addresses and mobile phone numbers is necessary for the intended targeting. In this regard, Hungarian legislation provides several opportunities for political parties to access voters’ personal data.

Among the legal sources, information provided to the parties by the election offices is of paramount importance. Candidates and nominating organizations (mostly political parties) may request the names and addresses of voters in the voter register from the relevant electoral office for campaign purposes. The information may be provided by age, gender, or address of the data subjects. Although these data do not contain information on the voters’ political opinion or party affiliation, the data may be used to obtain additional information for the purposes of microtargeting.

Secondly, political parties usually communicate with their supporters via various methods, including physical or email addresses and landline or mobile phone numbers. The sources of this information may vary. It may be collected from the data subject at a campaign rally or other events organized by the party. Supporters may provide the party with their contact details when, for instance, they sign an initiative for a referendum or support another political action with their signature. During elections, political parties may also use this data for campaign purposes.

The main risk concerning the processing of voters’ personal data by political parties arises from the lack of comprehensive legislation and effective supervision. The current regulation of electoral procedure predates the GDPR and the events of 2016 (Brexit and the US election). Furthermore, there is no specific legislation concerning political campaign activities; only the provisions of the Privacy Act of 2011 had previously been applied. Therefore, the relevant laws do not address the possibility of microtargeting, and thus the importance of the integrity and security of voters’ personal data.

Conclusions
Given the global events of recent years, the focus on the integrity and security of voters’ personal data will be a priority from a legislative standpoint as well as from the point-of-view of the relevant actors in the EU and around the world. The lack of regulation and effective supervision in this regard may lead to serious consequences that could harm democracy and erode society’s trust in its institutions.

Although the GDPR and the Privacy Act provide wider protection for data subjects, and thus for voters, it is necessary to adopt regulations that define specific technological requirements and other safeguards to prevent misuse and to protect the integrity of voters’ data.

Author’s note: Laszlo Dellei is an experienced, certified and internationally recognized InfoSec, cybersecurity, privacy and ITSM professional with a multidisciplinary background. Laszlo received his B.S. degree in Information Technology from the Dennis Gabor College and an MBA in Information Management specialized in Security from the Metropolitan University. Furthermore, Laszlo proudly holds, among others, the following internationally recognized credentials: C|CISO, CISA, CGEIT, CRISC, ITIL and ISO27001. Laszlo has been working in these disciplines for almost 15 years. As the CEO of Kerubiel Kft, besides management tasks, he also is responsible for high-priority operations in the following domains: physical security, environmental security, and cyber and information security. Laszlo also is a registered and active security expert of the European Commission. Furthermore, he is a member of the Hungarian Chamber of Judicial Experts, a Gold Member of ISACA, a member of the EC-Council, and a member of the John von Neumann Computer Society.

Transparent Use of Personal Data Critical to Election Integrity in UK

Mike Hughes

Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective on the topic. Today, we publish a post from Mike Hughes, providing a UK perspective.

In some ways, the UK has less to worry about when it comes to protecting the integrity of election data and outcomes than some of its international counterparts. The UK election process is well established and proven over many years (well, centuries), and UK elections are generally conducted in a very basic manner. Before an election, voters receive a poll card indicating the location where they should go to vote. On polling day, voters enter the location, provide their name and address, and are presented with a voting slip. They take this slip, enter the voting booth, pick up a pencil and put a cross in the box next to their candidate of choice. Voters then deposit this paper slip in an opaque box to be counted once polls close in the evening.

Pretty simple (and old-fashioned). Yet, despite the UK’s relatively straightforward election procedures, the Political Studies Association reported in 2016 that the UK rated poorly in election integrity relative to several other established democracies in Western Europe and beyond. More recently, there are strong suspicions that social media has been used to spread false information to manipulate political opinion and, therefore, election results. Consider that one of the biggest examples is the Cambridge Analytica data misuse scandal that has roiled both sides of the Atlantic, and it is fair to say that the matter of election integrity has only become more of a top-of-mind concern in the UK since that 2016 report, especially during the campaigning phase.

Rightfully so, steps are being taken to provide the public greater peace of mind that campaigns and elections are being conducted fairly. In 2017, the Information Commissioner launched a formal inquiry into political parties’ use of data analytics to target voters amid concerns that Britons’ privacy was being jeopardized by new campaign tactics. The inquiry has since broadened and become the largest investigation of its type by any Data Protection Authority, involving social media online platforms, data brokers, analytics firms, academic institutions, political parties and campaign groups. A key strand of the investigation centers on the link between Cambridge Analytica, its parent company, SCL Elections Limited, and Aggregate IQ, and involves allegations that data, obtained from Facebook, may have been misused by both sides in the UK referendum on membership of the EU, as well as to target voters during the 2016 United States presidential election process.

The investigation remains ongoing, but the Information Commissioner needed to meet her commitment to provide Parliament’s Digital, Culture, Media and Sport Select Committee with an update on the investigation for the purposes of informing its work on the “Fake News” inquiry before the summer recess. A separate report, “Democracy Disrupted? Personal Information and Political Influence,” has been published, covering the policy recommendations from the investigation. This includes an emphasis on the need for political campaigns to use personal data lawfully and transparently.

Social media powers also should draw upon their considerable resources to become part of the solution. Facebook, Google and Twitter have indicated they will ensure that campaigns that pay to place political adverts with them will have to include labels showing who has paid for them. They also say that they plan to publish their own online databases of the political adverts that they have been paid to run. These will include information such as the targeting, actual reach and amount spent on those adverts. These social media giants are aiming to publish their databases in time for the November 2018 mid-term elections in the US, and Facebook has said it aims to publish similar data ahead of the local elections in England and Northern Ireland in May 2019.

All of these considerations are unfolding in an era when the General Data Protection Regulation has trained a bright spotlight on how enterprises are leveraging personal data. As a society, we have come to understand that while the big data era presents many unprecedented opportunities for individuals and organizations, the related privacy, security and ethical implications must be kept at the forefront of our policies and procedures.

As I stated at the start of this article, the UK’s election system is a well-proven, paper-based process that has changed very little over many, many years. One thing is certain: sometime in the not-too-distant future, our paper-based system will disappear and be replaced by a digital system. There will then be a need for a highly trusted digital solution that provides a high level of confidence that the system cannot be tampered with or manipulated. These systems aren’t there yet, but technologies such as blockchain may be the start of the answer. Technology-driven capabilities will continue to evolve, but our commitment to integrity at the polls must remain steadfast.

Concerted Effort Needed to Assure Data Integrity in Electoral Process

Rob Clyde

Editor’s note: A recent ISACA survey found that 85 percent of technology professionals worldwide (and 86 percent in the US) are concerned about the ability of the public sector to conduct secure, reliable and accurate elections. ISACA board chair Rob Clyde explores the topic of election data integrity in more detail below.

The motivations of cybercriminals are as diverse as their forms of attacks. Many cybercriminals are after money, naturally, but plenty of other incentives exist, including the allure of exerting power and influence. Unfortunately, one of the most impactful ways to do so involves tampering with the integrity of elections, a rising concern in the United States and around the world.

While election security is not a new topic, it took on increased prominence in the US in the aftermath of the 2016 presidential election and has prominently surfaced again in the build-up to November’s midterm elections. Although allegations of nation-state interference in the US election process have commanded much of the media attention, protecting the overall data integrity of elections is a much more encompassing issue than any attempt by a nation-state to influence a particular election cycle or campaign. Working to enhance the reliability of the information systems and technology that assure data integrity in the electoral process will be an ongoing challenge requiring bipartisan attention and support from leaders at all levels of government.

Encouragingly, this challenge is clearly on the radar of US elected officials, with a bill to establish the National Commission on the Cybersecurity of United States Election Systems and the Secure Elections Act among the efforts to drive toward solutions. A recently formed Task Force on Election Security, composed of members of the Homeland Security Committee and House Administration Committee, allowed for members from both committees to interact with election stakeholders, as well as cybersecurity and election infrastructure experts, to analyze the effectiveness of the US election system. The task force produced a final report and future recommendations, with the goal of maintaining free, fair and secure elections.

While the attention on this topic in Washington, D.C., is an important starting point, there must be extensive collaboration between federal agencies and the state officials who are charged with direct oversight of elections. Many state officials face the massive undertaking of securing elections with small IT staffs and few cybersecurity professionals on their teams. Given the high stakes involved and the growing complexities of the threat landscape, election systems require more dedicated resources to ensure the appropriate people, processes and technology are in place to stave off threats to election data integrity, whether intentional or otherwise. The federal government must provide the funding so that states are able to update vulnerable voting machines and modernize their IT infrastructures. Federal funding allowing for the training of election officials and poll workers about cyber risks would be another worthwhile investment. Further, since elections are generally run at the state level, states and federal agencies need to increase coordination to allow for real-time notifications of security breaches and threats. This could also present an opportunity for the government to tap into the capabilities of the private sector to strengthen election security.

Additionally, as the task force recommended, states should conduct post-election audits in order to ensure the election was not compromised, as well as identify and limit future risks. The implementation of post-election audits is an immediate step the government can take to limit future vulnerabilities while also strengthening public trust in the process – an important consideration that should not be overlooked.

One intriguing longer-term solution for election data integrity is the deployment of blockchain technology. Blockchain is now being embraced by many different sectors and agencies, and was recently used in West Virginia for absentee voting leading up to the midterms. Blockchain has the ability to secure a permanent record that is timestamped and signed, and can therefore not be altered in any way. Developing this cyberattack-resilient database could prove to be a critical step toward mitigating any potential manipulation or voting fraud.
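
The tamper evidence described here comes from chaining each record to the hash of the one before it, so altering any earlier entry invalidates everything that follows. The sketch below is a minimal Python illustration of that idea only; a production voting system would add distributed consensus, digital signatures and far more.

```python
import hashlib
import json
import time

# Minimal sketch of a hash-chained, timestamped record log: altering any earlier
# entry changes its hash and invalidates every entry that follows it.
def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    chain.append(entry)

def verify(chain):
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode("utf-8")
        ).hexdigest()
        prev_ok = entry["prev_hash"] == (chain[i - 1]["hash"] if i else "0" * 64)
        if recomputed != entry["hash"] or not prev_ok:
            return False
    return True

chain = []
add_record(chain, {"record_id": "A-001", "event": "ballot received"})
add_record(chain, {"record_id": "A-002", "event": "ballot received"})
print(verify(chain))                               # True
chain[0]["payload"]["event"] = "ballot discarded"  # tamper with an earlier record
print(verify(chain))                               # False
```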

While audit, governance, risk and information/cyber security professionals are charged with many important responsibilities, helping to solidify the data integrity of elections is among the most vital. In the US and around the world, fair and trustworthy elections are an indispensable component of free societies. Losing trust in the outcomes of elections would lead to a level of discord that would have a profoundly destabilizing impact. The events of the past few years have reinforced that protecting the integrity of the electoral system in this new era will require a significant investment in attention and resources. So be it. The alternative, taking our election security for granted, no longer is a viable path.
