As the new year begins and business leaders refine their 2019 plans, how to effectively deploy technology increasingly will be a focal point of conversations in the boardroom and elsewhere throughout the enterprise. While trending technologies such as artificial intelligence, blockchain and 5G wireless networks command much of the mindshare in the new year, one technology that might no longer be deemed buzzworthy should nonetheless be a major consideration in 2019 for the C-suite and security teams alike – how to derive value while mitigating risk from big data.
The term “big data” has been in circulation for many years, but big data continues to evolve in scope and capability, especially with AI, augmented analytics and other emerging technologies enabling data to be harnessed in more sophisticated fashion. ISACA’s 2018 Digital Transformation Barometer shows that big data remains the technology most capable of delivering organizations transformative value, and it is easy to see why. The positive potential of big data is enormous, spanning virtually all industries and impacting both the public and private sectors. Of critical importance, organizations can tap into big data sets to better understand their customers and configure predictive models that allow them to be more strategic and proactive in their business planning. While the benefits for private-sector enterprises are immense, there is perhaps even more upside for society, generally. For example, big data can be used to accelerate the progress made in scientific research, improve patient outcomes in healthcare by revealing more nuanced treatment patterns and aid in the modernization of urban centers by allowing cities to more effectively govern traffic flow and the deployment of city resources. In the context of these and other high-impact innovations that are in progress, International Data Corporation (IDC) projects that worldwide revenues for big data and business analytics will reach a whopping $260 billion by 2022.
Despite the considerable enthusiasm for big data-driven projects and use cases, big data also presents a range of evolving challenges from a security and privacy standpoint. All emerging technologies introduce new threats, and the same holds true for big data. While many of the fundamentals of network security apply to big data, there are some distinct considerations when it comes to securing big data. Enterprises often turn to NoSQL databases, which allow for more scalability than conventional, relational databases, to store big data, introducing new cost and security challenges. Additionally, traditional controls such as encryption may introduce bottlenecks due to the size of the data, meaning practitioners need to become more creative in protecting big data. Data anonymization, which allows organizations to protect the privacy of individuals within a data set, is typically an effective approach, and can be especially useful when enterprises are working with third-party vendors. Further, security frameworks, particularly those that align with pertinent standards and regulations, can be utilized during big data implementation projects in order to incorporate all appropriate controls by design. These frameworks also help organizations avoid taking shortcuts in their data governance that could open the door to a large-scale breach or, on a less dramatic but still significant note, identify inefficient practices that do little to help organizations extract value from their data.
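To make the anonymization approach mentioned above concrete, here is a minimal sketch of salted-hash pseudonymization in Python. The field names and record layout are hypothetical examples, and a technique like this is pseudonymization rather than full anonymization, so it complements rather than replaces the framework-driven controls described above.

```python
# A sketch of salted-hash pseudonymization: direct identifiers are
# replaced with irreversible hashes, while the analytic fields an
# organization actually needs are left intact. The field names here
# are hypothetical examples.
import hashlib
import secrets

# One secret salt per data set. Kept away from third parties, it
# prevents re-identification by simply hashing guessed names.
SALT = secrets.token_hex(16)

def pseudonymize(value: str, salt: str = SALT) -> str:
    """Replace a direct identifier with a salted SHA-256 hash."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

def anonymize_record(record: dict, identifiers=("name", "email")) -> dict:
    """Hash the identifying fields; pass analytic fields through."""
    return {
        key: pseudonymize(val) if key in identifiers else val
        for key, val in record.items()
    }

record = {"name": "Jane Doe", "email": "jane@example.com", "purchases": 12}
safe = anonymize_record(record)
print(safe)
```

Because the same salt is used across the data set, hashed records remain linkable for analytics while the underlying identities stay hidden from anyone without the salt; stronger guarantees (aggregation, k-anonymity) would be needed for data released publicly.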
Whatever approaches are taken, enterprise leaders need to be every bit as committed to safeguarding big data as they are to the data’s collection and utilization. Without a doubt, big data presents an attractive target to attackers since big data is highly valued – after all, the bigger the data, the bigger the breach. Several attack types exist, potentially impacting both the confidentiality and the integrity of the data, meaning security practitioners must possess an overarching understanding of the threats that could impact the data. This challenge becomes all the more difficult considering the wide variety of sources and data types that encompass big data.
Although the security risks that accompany big data can be daunting, addressing these concerns head-on is the only viable option, as big data becomes an increasingly valuable asset for enterprises to harness. Not only does the proliferation of data in the digital transformation era create new security risks, but the complexity of storing and managing the data can contribute to lower employee morale and higher turnover among IT professionals, according to a Vanson Bourne study. All of these factors call upon organizations to develop a cohesive, holistic strategy for big data, with extensive collaboration between the C-suite and enterprise security leaders. We have seen the hype around many emerging technologies ebb and flow in recent years, but the need to effectively handle big data has become a fixture on the enterprise landscape that will require ongoing attention and investment in the new year, and beyond.
Editor’s note: This post originally published in CSO.
Entrepreneurs and IT leaders frequently underestimate the true power that slow technology has to negatively impact a business. It’s tempting to wait as long as possible to upgrade or replace your team’s devices; after all, every additional month you get out of a device results in measurable cost savings for the business. But all those slow, aging devices are probably interfering with your business more than you realize.
The roots of slow technology
Slow technology comes in many forms, but its symptoms are consistent: processing becomes slower, making it harder for employees to complete their tasks in a timely manner, and productivity occasionally stalls altogether (as when devices crash).
Generally speaking, there are three main influencing factors that can negatively impact a device’s speed:
- Age. First and most notably, devices tend to slow down as they get older. Their processors don’t work as efficiently, and disk fragmentation can interfere with how the device functions. On top of that, new programs tend to be designed for faster, more up-to-date machines, which means older computers can’t run them as intended—resulting in a kind of illusionary slowdown.
- Malware. A sudden or inexplicable slowdown may be the result of malware infecting the device. In some cases, this is an easy problem to fix; a quick cleanup can instantly restore the device to full working order. In other cases, more intensive troubleshooting may be required, or the device might need to be wiped clean.
- Improper use. Machines can also suffer tremendous slowdown if they aren’t being used responsibly. For example, if an employee spends lots of time downloading files, but never deletes those files, or if they have tons of installed programs that they never use, the computer won’t work as efficiently as it could. Employees may also misreport slow devices; if they have 39 tabs open in a web browser and one of them won’t load as quickly as they would like, the problem probably isn’t with the device itself.
The effects of slow tech
As for how that speed affects productivity, there are several areas of impact to consider:
- Actions and tasks per day (or per hour). This is the most impactful effect, and the most obvious one. If employees face even a slight delay when attempting to interact with in-app elements, or when performing their most basic tasks, those small pieces of interference can quickly add up to compromise many hours of productivity. Depending on the severity of the problem, a slow device can cost you upwards of an hour per day, per employee.
- Availability of new programs. Dealing with a slow device can also affect which types of programs an employee is able to run. If they feel their device is old, they may be less willing to update their existing programs (which ultimately yields a security risk). They may also intentionally avoid downloading and using new programs that would otherwise facilitate greater productivity, or new responsibilities.
- Employee morale. Of course, being forced to tolerate a slow device can also result in decreased employee morale. Over time, your employees will grow more frustrated, aware that they aren’t working to their full potential, and that frustration will result in many hours of lost work (not to mention higher absenteeism).
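To make the first point above concrete, a back-of-the-envelope sketch in Python; the per-task delay, task count and team size are illustrative assumptions, chosen to line up with the roughly hour-per-day-per-employee figure cited above, not measurements.

```python
# Back-of-the-envelope estimate of productivity lost to slow devices.
# All figures below are illustrative assumptions, not measurements.
delay_per_task_s = 6      # assumed extra seconds per interaction on a slow device
tasks_per_day = 600       # assumed interactions per employee per day
employees = 50            # assumed team size

lost_hours_per_employee = delay_per_task_s * tasks_per_day / 3600
lost_hours_per_team = lost_hours_per_employee * employees

print(f"{lost_hours_per_employee:.1f} hour(s) lost per employee per day")
print(f"{lost_hours_per_team:.0f} hours lost across the team per day")
```

Even with modest assumptions, a few seconds of friction per task compounds into dozens of lost hours per day across a team, which is why the per-device savings of deferring upgrades can be misleading.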
Fixing the problem
So, what can you do to fix the problem?
- Clean up any malware. First, investigate any slow devices to see what the real root of the problem is. If there are any instances of malware, make sure to remove them, and test the device again. While you’re at it, make sure your proactive defenses (such as firewalls and antivirus software) are working effectively.
- Instruct employees on proper use. Host a seminar or send out a memo that instructs employees how to properly care for their devices, especially if they’re allowed to take those devices home as if they were personal belongings. Give them tips for how to keep their devices functioning optimally, and how to temporarily boost speed for intensive applications.
- Invest in new upgrades. If you’re still dealing with old tech, make an effort to upgrade it. Sometimes, you can get by with a RAM upgrade. Other times, you may need to replace the device entirely. But remember—this is a long-term investment in your team’s productivity.
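As a sketch of the "investigate first" advice above, the snippet below lists a machine's top memory consumers before anyone concludes the hardware itself is at fault. It assumes a Linux host (it reads the /proc filesystem directly) and is a triage starting point only, not a full diagnostic.

```python
# A minimal triage sketch: list processes by memory use before
# assuming aging hardware is the problem. Reads the Linux /proc
# filesystem directly, so it needs no external tools.
import os

procs = []
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/status") as f:
            fields = dict(line.split(":", 1) for line in f if ":" in line)
        # Kernel threads have no VmRSS entry; report them as 0.
        rss_kb = int(fields["VmRSS"].split()[0]) if "VmRSS" in fields else 0
        procs.append((rss_kb, fields["Name"].strip()))
    except (FileNotFoundError, ProcessLookupError, PermissionError):
        continue  # process exited (or is restricted) while scanning

# Top five memory consumers (resident set size, in kilobytes).
for rss_kb, name in sorted(procs, reverse=True)[:5]:
    print(f"{rss_kb / 1024:8.1f} MB  {name}")
```

If one runaway process dominates the list, cleanup or configuration is the fix; if memory is simply exhausted across normal workloads, that is the signal a RAM upgrade or replacement is warranted.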
Correcting, upgrading, or replacing your slow technology can be both costly and time-consuming, but it’s almost always worth the effort. Not only will your team be able to utilize more resources and work faster, they’ll be happier—and that morale will almost certainly have a positive impact on your business’s profitability. Stay proactive, and take action on slow devices before they have a chance to interfere with your work.
As we continue the end-of-year review on all things tech, digital ethics and the progress of artificial intelligence (AI) in people-related technologies spring to mind. People tech affects HR, recruitment and other areas that enable businesses to hire, manage and plan their key asset – people. With new suppliers constantly entering the market, it is very difficult for businesses to distinguish technology that is ethical with regard to data, code and algorithms from technology that is not.
The first thing to highlight is that AI is a huge buzzword in people tech these days. However, the term is abused more often than it should be, resulting in confusion for businesses that simply may not have the time to keep on top of the technology or to research it before buying, often at great cost. To clarify, AI has several strands, two of which are machine learning and automation. These two are by far the most widely used in people tech at the moment, whereas other strands of AI are more relevant in other sectors; autonomous cars, for example, use robotics and other relevant strands of AI.
Now, regardless of which strand of AI is in use, and especially at the algorithm-building stage, it is extremely important for every developer and tech business not only to think about “ethics” and “biases,” but to actually implement practices that help them tackle their own challenges in these areas, as well as those of their employees and users. This truly allows them to build and code purpose-driven, value-adding commercial products. Increasingly, experts, individuals and organizations are talking about this important issue, from the TechUK committees that I participate in to the IEEE guidelines I contribute to globally.
However, very little has been seen in terms of action, and so, for my part, I am “practicing what I preach.” While we are a startup, and it does add a couple hours to my time reviewing the code for new features, it is very satisfying to know that this work comes from a place of supporting users. In addition, we prioritize careful data use and management; we will strictly only use the data that helps our users with analytics (based on what our platform offers) and provides a better experience.
How can larger tech companies and software houses implement this? I believe that the larger the business, the easier it should be to establish processes and resources that effectively address the desired outputs of the business vision and support customers, while also serving as an in-house ethics and bias reviewer. This gives businesses a lot of power internally to follow the guidelines drawn up by governments and other organizations working actively to support this framework-building.
There is no doubt that 2019 will be a key year for growth in digitization, automation, augmented analytics and blockchain. So, I really hope that businesses stop talking about the fundamental challenges of digital and AI ethics, and start building tools and frameworks to monitor them.
About the author: Bhumika Zhaveri is a non-conventional and solutions-driven technology entrepreneur and businesswoman. As an experienced HR technologist, she has expertise in HR and recruitment technology and programme management for change and transformation. Privileged to look at challenges differently than most due to versatile life, personal and professional experiences, she is actively involved with TechUK, IEEE data ethics, AI and digital committees, and the TechSheCan charter with PWC, Girls Who Code and similar organizations supporting women in STEM. Currently, she is also the tech advisor for Resume Foundation and Bridge of Hope, and a founding member of Digital Anthropology.
Gartner’s recent list of top tech trends for 2019 included immersive experiences, which they described as follows:
“Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience.”
Below, I explore some of the anticipated themes related to VR/AR that will play a role in the coming year and beyond:
• Global AR & VR product revenues are expected to grow from US $3.8 billion in 2017 to US $56.4 billion in 2022, a 71 percent compound annual growth rate. This includes enterprise and consumer segments (ARtillry Intelligence).
- In VR, consumer revenue will eclipse enterprise revenue by a 3:1 ratio in 2022. Standalone VR like Oculus Go will accelerate consumer adoption.
- Head-worn AR will find a home with consumers. However, its specs and stylistic realities inhibit several consumer use cases in the near term. Apple’s potential 2021-2022 introduction of smart glasses will shift AR’s momentum and revenue share toward consumer spending.
- By 2022, enterprise AR’s revenue dominance over consumer AR will decelerate as smart glasses begin to penetrate consumer markets. Until then, mobile will dominate consumer AR, with most revenue derived from software as opposed to hardware (smartphone sales aren't counted).
• The patterns of investment and development in the different sectors in which VR/AR are applicable – or potentially applicable – show the increasing applicability of this technology beyond the games and entertainment fields that saw its birth in the 1990s; 38 percent of respondents, for example, believe VR growth in the enterprise sector has been “strong” or “very strong,” with an equivalent figure of 43 percent for AR (The XR Industry Survey 2018).
- Education is the enterprise sector that has been prioritizing VR/AR the most, and is the most competitive, despite the fact that it traditionally has had much less spending power than industry. Of respondents who reported that they are already using XR technologies, 23 percent were in the education sector.
- Architecture/engineering/construction was a close second at 18 percent. Healthcare is quite low on the list despite the obvious VR/AR potential in diagnosis and therapy, with just 7 percent of those using this technology coming from the healthcare sector.
- Industry expectations are that AR will blossom in the mainstream before VR does, in part because of the availability of open content development platforms like ARCore and ARKit, which have no VR counterparts.
- Many industries see benefits in the long term from combining VR and AR. VR’s superior ability to create a fully immersive environment currently gives it the edge in training and educational applications.
- Sixty-two percent of service organizations say that AR is providing measurable value for service in the following ways: better knowledge transfer among employees, increased employee efficiency onsite, improved first-time fix rates, and fewer truck rolls (IDC / PTC).
For an organization’s cyber culture to be effective, it must also be mature. A recent cybersecurity culture study conducted by ISACA and CMMI Institute found that only 5 percent of organizations believe no gap exists between their current and desired cybersecurity culture; a full third see a significant gap. That’s why I found it so valuable to sit down with cybersecurity leaders across the public, private and non-profit sectors in the UK last week to discuss cyber maturity, what it means to people and how we can help organizations value being more prepared.
The general consensus at our session, “The Future of Cyber Maturity and Benchmarking,” was that our work must start at the top with the board. We must be speaking in terms the boards will understand and getting boards to value cybersecurity as a business enterprise risk issue that must be managed as such. This hasn’t happened yet to the degree it needs to. The cybersecurity culture study confirms this feedback in that 58 percent of respondents cited a corresponding lack of a clear management plan or KPIs.
Another key word involved in maturity is resilience. No organization is ever completely bulletproof against attack. The idea is to train and plan thoroughly and to ensure that the organization as a whole is as prepared as possible, so that if and when an attack happens, it is in a position to respond efficiently and effectively. That’s a resilient organization, and the best we can ask for when it comes to cyber crime.
As organizations become more resilient, they must honor the need to effectively manage risk. The risk equation includes workforce readiness, security operations and capability maturity. Your workforce must be thoroughly trained to understand the risk at all levels.
The group was heavily focused on moving away from the old way of managing risk. Risk management is not about compliance or checklists; it is truly about building resilience through a risk-based approach.
A quality maturity model looks at people, processes and technology, and takes all these elements into consideration. However, the discussion was largely around the workforce readiness and how to motivate people to do what needs to be done. Asking the right questions as technology leaders is a start. Are we doing the right things? Are we doing them well? How can we ensure the board is informed and engaged, and that we are focused on areas of greatest risk?
As technology leaders and assurance professionals, we discussed the need to be ahead of the curve, implementing cybersecurity as a business imperative, rather than waiting for an accident and reacting at that time. An organization must know its risk appetite and its risk posture.
All of this counsel applies to organizations of any size and at all levels within the organization. We discussed how supply chains, micro businesses and small and medium enterprises (SMEs) have special considerations as they build capabilities. SMEs often have a much smaller staff to work with, but the responsibility to manage risk remains the same, making a focused and strategic approach all the more important.
A mature organization is one that has truly examined its risk and understands it from the top down, with buy-in to protect the organization from each and every employee. I look forward to continuing this important discussion.
Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective and UK perspective on the topic. Today, we publish a post from Laszlo Dellei, providing an EU perspective.
Brexit and the 2016 US presidential election showed that microtargeting voters to deliver them certain political messages may gradually alter voters’ decisions. While less publicized, concerns related to election data integrity also exist throughout the EU. The European Parliament has conducted several public hearings on this topic and the Commission is supporting Member States to secure their local and national elections, as well as their citizens’ participation in EU elections.
The Commission recently published a communication on free and fair European elections, which outlines all the efforts made by the institutions to make sure that the upcoming EU elections in 2019 will be held democratically. The EU’s strategy is to combine data protection, cybersecurity, cooperation, transparency, and appropriate sanctions.
For instance, the Commission proposes introducing financial penalties of 5 percent of the annual budget of the European party or political foundation concerned if they infringe the data protection rules in an attempt to influence the outcome of elections to the European Parliament.
Another key aspect of this strategy is the implementation of the General Data Protection Regulation (GDPR), which is equipped to help prevent and address unlawful use of personal data. Accordingly, the Commission prepared specific guidance highlighting the data protection obligations of relevance in the electoral context.
In parallel, the Commission published recommendations to enhance the efficient conduct of the 2019 EU elections. Key points are as follows:
- The EU encourages Member States to establish and support a national elections network to ensure cooperation in connected fields (such as data protection authorities, media regulators, cybersecurity authorities, law enforcement etc.).
- It is also recommended to encourage and facilitate the transparency of paid online political advertisements and communications.
- Member States should also take appropriate and proportionate technical and organizational measures to manage the risks posed to the security of network and information systems used for the organization of elections.
- Member States are encouraged to set up awareness-raising activities aimed at increasing the transparency of elections and building trust in the electoral processes.
Sources of voter data in Hungary
In my country, Hungary, the relevant regulations and practices may reveal certain risks and problems in this respect. Current rules providing protection of voters’ personal data, especially provisions governing integrity and security of such information, will be revised.
During microtargeting, information may be used to deliver political messages to the recipients. In addition to the name and political preferences of the data subject, the processing of physical or email addresses and mobile phone numbers is necessary for the intended targeting. In this regard, Hungarian legislation provides several opportunities for political parties to access voters’ personal data.
Among the legal sources, information provided to the parties by the election offices is of paramount importance. Candidates and nominating organizations (mostly political parties) may request the names and addresses of voters in the voter register from the relevant electoral office for campaign purposes. The information may be provided by age, gender, or address of the data subjects. Although these data do not contain information on the voters’ political opinion or party affiliation, the data may be used to obtain additional information for the purposes of microtargeting.
Secondly, political parties usually communicate with their supporters via various methods including physical or email addresses, land or mobile phone numbers, etc. The sources of this information may vary. It may be collected from the data subject at a campaign rally or other events organized by the party. Supporters may provide the party with their contact details when – for instance – they sign an initiative for a referendum, or when they support another political action with their signature. During the elections, political parties may also use this data for campaign purposes.
The main risk concerning the processing of personal data of voters by political parties arises from the lack of comprehensive legislation and effective supervision. The current regulation concerning electoral procedure predates the GDPR and the 2016 events (Brexit and the election in the US). Furthermore, there is no specific legislation concerning political campaign activities; only the provisions of the Privacy Act of 2011 had previously been applied. Therefore, the relevant laws do not focus on the possibility of microtargeting and thus the importance of integrity and safety of voters’ personal data.
Given the global events of recent years, the focus on the integrity and security of voters’ personal data will be a priority from a legislative standpoint as well as from the point-of-view of the relevant actors in the EU and around the world. The lack of regulation and effective supervision in this regard may lead to serious consequences that could harm democracy and erode society’s trust in its institutions.
Although the GDPR and the Privacy Act provide wider protection for data subjects, and thus for voters, it is necessary to adopt regulations that define specific technological requirements and other safeguards to prevent misuse and to protect the integrity of voters’ data.
Author’s note: Laszlo Dellei is an experienced, certified and internationally recognized InfoSec, cybersecurity, security, privacy and ITSM professional with a multidisciplinary background. Laszlo received his B.S. degree in Information Technology from the Dennis Gabor College and his MBA in Information Management, specialized in security, from the Metropolitan University. Furthermore, Laszlo proudly holds, among others, the following internationally recognized credentials: C|CISO, CISA, CGEIT, CRISC, ITIL and ISO27001. Laszlo has been working in these disciplines for almost 15 years. As the CEO of Kerubiel Kft, besides management tasks, he also is responsible for high-priority operations in the following domains: physical security, environmental security, and cyber and information security. Laszlo also is a registered and active security expert of the European Commission. Furthermore, he is a member of the Hungarian Chamber of Judicial Experts, a Gold Member of ISACA, a member of the EC-Council, and a member of the John von Neumann Computer Society.
Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective on the topic. Today, we publish a post from Mike Hughes, providing a UK perspective.
In some ways, the UK has less to worry about when it comes to protecting the integrity of election data and outcomes than some of its international counterparts. The UK election process is well-established and proven over many years (indeed, centuries), and therefore UK elections are generally conducted in a very basic manner. Before an election, voters receive a poll card indicating the location where they should go to vote. On polling day, voters enter the location, provide their name and address, and are presented with a voting slip. They take this slip, enter the voting booth, pick up a pencil and put a cross in the box next to their candidate of choice. Voters then deposit this paper slip in an opaque box to be counted once polls are closed in the evening.
Pretty simple (and old-fashioned). Yet, despite the UK’s relatively straightforward election procedures, the Political Studies Association reported in 2016 that the UK rated poorly in election integrity relative to several other established democracies in Western Europe and beyond. More recently, there are strong suspicions that social media has been used to spread false information to manipulate political opinion and, therefore, election results. Consider that one of the biggest examples is the Cambridge Analytica data misuse scandal that has roiled both sides of the Atlantic, and it is fair to say that the matter of election integrity has only become more of a top-of-mind concern in the UK since that 2016 report, especially during the campaigning phase.
Rightfully so, steps are being taken to provide the public greater peace of mind that campaigns and elections are being conducted fairly. In 2017, the Information Commissioner launched a formal inquiry into political parties’ use of data analytics to target voters amid concerns that Britons’ privacy was being jeopardized by new campaign tactics. The inquiry has since broadened and become the largest investigation of its type by any Data Protection Authority, involving social media online platforms, data brokers, analytics firms, academic institutions, political parties and campaign groups. A key strand of the investigation centers on the link between Cambridge Analytica, its parent company, SCL Elections Limited, and Aggregate IQ, and involves allegations that data, obtained from Facebook, may have been misused by both sides in the UK referendum on membership of the EU, as well as to target voters during the 2016 United States presidential election process.
The investigation remains ongoing, but the Information Commissioner needed to meet her commitment to provide Parliament’s Digital Culture Media and Sport Select Committee with an update on the investigation for the purposes of informing their work on the “Fake News” inquiry before the summer recess. A separate report, “Democracy Disrupted? Personal Information and Political Influence”, has been published, covering the policy recommendations from the investigation. This includes an emphasis on the need for political campaigns to use personal data lawfully and transparently.
Social media powers also should draw upon their considerable resources to become part of the solution. Facebook, Google and Twitter have indicated they will ensure that campaigns that pay to place political adverts with them will have to include labels showing who has paid for them. They also say that they plan to publish their own online databases of the political adverts that they have been paid to run. These will include information such as the targeting, actual reach and amount spent on those adverts. These social media giants are aiming to publish their databases in time for the November 2018 mid-term elections in the US, and Facebook has said it aims to publish similar data ahead of the local elections in England and Northern Ireland in May 2019.
All of these considerations are unfolding in an era when the General Data Protection Regulation has trained a bright spotlight on how enterprises are leveraging personal data. As a society, we have come to understand that while the big data era presents many unprecedented opportunities for individuals and organizations, the related privacy, security and ethical implications must be kept at the forefront of our policies and procedures.
As I stated at the start of this article, the UK’s election system is a well-proven, paper-based process that has changed very little over many, many years. One thing is certain: sometime in the not-too-distant future, our paper-based system will disappear and be replaced by a digital system. There will then be a need for a highly trusted digital solution that provides a high level of confidence that the system cannot be tampered with or manipulated. These systems aren’t there yet, but technologies such as blockchain may be the start of the answer. Technology-driven capabilities will continue to evolve, but our commitment to integrity at the polls must remain steadfast.
Editor’s note: A recent ISACA survey found that 85 percent of technology professionals worldwide (and 86 percent in the US) are concerned about the ability of the public sector to conduct secure, reliable and accurate elections. ISACA board chair Rob Clyde explores the topic of election data integrity in more detail below.
The motivations of cybercriminals are as diverse as their forms of attacks. Many cybercriminals are after money, naturally, but plenty of other incentives exist, including the allure of exerting power and influence. Unfortunately, one of the most impactful ways to do so involves tampering with the integrity of elections, a rising concern in the United States and around the world.
While election security is not a new topic, it took on increased prominence in the US in the aftermath of the 2016 presidential election and has prominently surfaced again in the build-up to November's midterm elections. Although allegations of nation-state interference in the US election process have commanded much of the media attention, protecting the overall data integrity of elections is a much more encompassing issue than any attempt by a nation-state to influence a particular election cycle or campaign. Working to enhance the reliability of the information systems and technology that assure data integrity in the electoral process will be an ongoing challenge requiring bipartisan attention and support from leaders at all levels of government.
Encouragingly, this challenge is clearly on the radar of US elected officials, with a bill to establish the National Commission on the Cybersecurity of United States Election Systems and the Secure Elections Act among the efforts to drive toward solutions. A recently formed Task Force on Election Security, composed of members of the Homeland Security Committee and House Administration Committee, allowed members from both committees to interact with election stakeholders, as well as cybersecurity and election infrastructure experts, to analyze the effectiveness of the US election system. The task force produced a final report and recommendations, with the goal of maintaining free, fair and secure elections.
While the attention on this topic in Washington, D.C., is an important starting point, there must be extensive collaboration between federal agencies and the state officials who are charged with direct oversight of elections. Many state officials face the massive undertaking of securing elections with small IT staffs and few cybersecurity professionals on their teams. Given the high stakes involved and the growing complexities of the threat landscape, election systems require more dedicated resources to ensure the appropriate people, processes and technology are in place to stave off threats to election data integrity, whether intentional or otherwise. The federal government must provide the funding so that states are able to update vulnerable voting machines and modernize their IT infrastructures. Federal funding allowing for the training of election officials and poll workers about cyber risks would be another worthwhile investment. Further, since elections are generally run at the state level, states and federal agencies need to increase coordination to allow for real-time notifications of security breaches and threats. This could also present an opportunity for the government to tap into the capabilities of the private sector to strengthen election security.
Additionally, as the task force recommended, states should conduct post-election audits in order to ensure the election was not compromised, as well as identify and limit future risks. The implementation of post-election audits is an immediate step the government can take to limit future vulnerabilities while also strengthening public trust in the process – an important consideration that should not be overlooked.
One intriguing longer-term solution for election data integrity is the deployment of blockchain technology. Blockchain is now being embraced by many different sectors and agencies, and was recently used in West Virginia for absentee voting leading up to the midterms. Blockchain can secure a permanent record that is timestamped and signed, and that therefore cannot be altered without detection. Developing such a cyberattack-resilient database could prove to be a critical step toward mitigating potential manipulation or voting fraud.
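To illustrate the tamper-evidence property described above, the following is a minimal Python sketch of a hash-chained, timestamped record: each block carries the hash of its predecessor, so editing any earlier entry breaks every later link. This is an illustration of the general principle only, not the system used in West Virginia, and all names and record contents are hypothetical.

```python
import hashlib
import json
import time

def block_digest(block: dict) -> str:
    """SHA-256 over the block's record, timestamp and previous hash."""
    payload = {k: block[k] for k in ("record", "timestamp", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(record: dict, prev_hash: str) -> dict:
    """Bundle a record with a timestamp and the previous block's hash, then seal it."""
    block = {"record": record, "timestamp": time.time(), "prev_hash": prev_hash}
    block["hash"] = block_digest(block)
    return block

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier block invalidates the chain."""
    for i, block in enumerate(chain):
        if block["hash"] != block_digest(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny chain of two (hypothetical) ballot records
chain = [make_block({"ballot": "A"}, prev_hash="0" * 64)]
chain.append(make_block({"ballot": "B"}, prev_hash=chain[-1]["hash"]))
print(chain_is_valid(chain))   # the untouched chain verifies

# Tampering with the first record is immediately detectable
chain[0]["record"]["ballot"] = "B"
print(chain_is_valid(chain))   # verification now fails
```

A real voting system would add digital signatures and distributed consensus on top of this structure; the hash chain alone only makes tampering detectable, not impossible.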
While audit, governance, risk and information/cyber security professionals are charged with many important responsibilities, helping to solidify the data integrity of elections is among the most vital. In the US and around the world, fair and trustworthy elections are an indispensable component of free societies. Losing trust in the outcomes of elections would lead to a level of discord that would have a profoundly destabilizing impact. The events of the past few years have reinforced that protecting the integrity of the electoral system in this new era will require a significant investment in attention and resources. So be it. The alternative, taking our election security for granted, no longer is a viable path.
Fighting poverty and achieving a high economic growth rate are two key priorities for developing countries.
Achieving both of these goals is reliant on financial inclusion. Developing a national digital transformation strategy that focuses on transforming the traditional economy into a digitized economy is the best way to accelerate progress toward this end goal.
The journey to financial inclusion is reliant on fintechs: disruptors in the financial sector that are driving innovative transformation and changing the way financial services are delivered, the medium of transactions and the approach to business analysis.
Unlike traditional financial services firms, fintechs are not tied to legacy systems that can delay progress: they can move faster toward new and innovative services by adopting new technologies and redefining standards and expectations within the industry. Fintechs can quickly deploy emerging technologies like blockchain, artificial intelligence and machine learning, technologies that will fundamentally change the world of financial services. PwC UK notes that already "Some large financial institutions are also relying on blockchain for internal transactions between territories, effectively reducing the internal cost of moving money."
Rapid development in consumer technologies also means customers’ expectations have grown and they now expect a level of personalization and customization which can only be addressed through automation and keeping up with the pace of emerging technologies. Further, these technologies can be used to streamline customer service through the use of chatbots and automated tools. Electronic payments, biometric-enabled authentication and blockchain for digital transactions will all improve security and reduce fraud while increasing customer satisfaction – making them core to new financial services solutions.
Artificial intelligence and machine learning in particular have the ability to improve fraud detection and reduce the need for human oversight by up to 50%. Financial Fraud Action UK (FFA UK) stated this year that fraud costs the UK £2 million every day (according to 2016 figures), and experts expect annual losses from online credit card fraud alone to reach $32 billion by 2020. Artificial intelligence can play a key part in detecting this fraud and automating the process, with techniques for handling imbalanced data, such as oversampling, undersampling and combined class methods, helping models learn from the rare fraudulent cases.
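As a minimal sketch of one of these approaches, the Python example below implements random oversampling: duplicating minority-class (fraudulent) examples at random until the classes are balanced, so a classifier is not dominated by the legitimate majority. The toy data and function name are illustrative assumptions, not taken from any production fraud system.

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class examples at random until every class
    matches the size of the largest class (simple random oversampling)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    majority = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        idx = [i for i, y in enumerate(labels) if y == cls]
        for _ in range(majority - n):  # add copies until this class is balanced
            i = rng.choice(idx)
            out_x.append(samples[i])
            out_y.append(labels[i])
    return out_x, out_y

# Toy transaction data: 8 legitimate (label 0) vs. 2 fraudulent (label 1)
X = [[amount] for amount in range(10)]
y = [0] * 8 + [1] * 2
X_bal, y_bal = random_oversample(X, y)
print(Counter(y_bal))  # both classes now have 8 examples
```

Undersampling works in the opposite direction, discarding majority-class examples, and combined class methods mix the two; libraries such as imbalanced-learn package these techniques for real pipelines.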
Governments and banks are already seeing the benefits of these emerging technologies. There are two particular examples where their deployment is lowering the cost of financial transactions. In April 2018, the National Bank of Egypt announced that it had joined R3's large initiative focusing on the research and application of blockchain. More than 200 banks and international companies have joined this initiative.
By 2021, Dubai will be using blockchain technology for more than 50% of financial transactions, expecting to save 11 billion AED by doing so. When announcing its blockchain strategy, Dubai predicted a 300 million dollar blockchain market across the financial sector, healthcare, transportation, urban planning, smart energy, digital commerce, and tourism.
Emerging technologies readiness
The Emerging Technologies Readiness Survey, published in Egypt during August 2018 by my team, collected the responses of 91 executives from different sectors across technology, banking and fintech. The results show that almost 74% are already using emerging technologies, with almost 29% using big data, 18% machine learning, 17% artificial intelligence and almost 8% blockchain.
Figure 1: Emerging Technologies Readiness Survey
The main driver behind adopting emerging technologies was business improvements, with 62% of respondents using emerging technologies citing this.
Figure 2: Emerging Technologies Readiness Survey
Half of respondents said their companies measured the ROI after using these technologies, but a surprising 32% do not measure the ROI and almost 18% were unsure whether their company does or does not.
Figure 3: Emerging Technologies Readiness Survey
Almost 70% of respondents whose companies were yet to adopt emerging technologies in their business stated that they have plans to deploy one or more within the next five years.
Figure 4: Emerging Technologies Readiness Survey
When asked which emerging technologies they were most interested in deploying, almost 34% of respondents said they would consider blockchain, nearly 35% said artificial intelligence, 41% said big data, and nearly 30% said machine learning.
Figure 5: Emerging Technologies Readiness Survey
Embracing emerging technologies for financial inclusion in developing countries
It is clear that emerging technologies will be essential to accelerate the goals of developing countries in achieving high economic growth rates and in driving financial inclusion and a thriving digital economy. Yet traditional financial services firms cannot easily adapt to these emerging technologies because of their legacy systems. They can, however, partner with fintechs to gain the benefits of emerging technologies deployment and achieve great mutual success.
Fintechs, traditional financial services firms, technology companies and governments need to develop and build digital transformation strategies together – strategies that include a plan of secure emerging technologies deployment and that have a clear vision of how they will maximize the benefits and minimize the risks of these technologies.
Security readiness for emerging technologies
Using emerging technologies not only enables innovative new financial services, but can also improve the security of information systems.
At the same time, emerging technologies such as machine learning and artificial intelligence will increasingly be used for cyber-attacks, and many organizations are not yet equipped to withstand them. Two-thirds of respondents to the survey see potential risks from emerging technologies, with almost 59% saying their companies also realize these potential risks. A somewhat smaller 44% said their companies have a risk mitigation plan for emerging technologies.
Figure 6: Emerging Technologies Readiness Survey
Figure 7: Emerging Technologies Readiness Survey
Figure 8: Emerging Technologies Readiness Survey
Despite the concerns around risks, most respondents could see a great opportunity for using emerging technologies to improve the level of information security at their companies, with almost 81% saying they will use emerging technologies for that purpose.
Figure 9: Emerging Technologies Readiness Survey
Editor’s note: Mahmoud Abouelhassan will provide further insights on this topic on 30 October at ISACA’s CSX Europe 2018 conference in London.
Editor’s note: Peter Weill, senior research scientist and chair of the Center for Information Systems Research (CISR) at the MIT Sloan School of Management, is an award-winning author who focuses on the role, value and governance of digitization in enterprises. Weill, who co-authored What’s Your Digital Business Model? with Stephanie L. Woerner, recently discussed enterprise digital transformation themes with ISACA Now after addressing chapter leaders at ISACA’s Global Leadership Summit in Chicago. The following is a transcript of the interview, edited for length and clarity:
ISACA Now: What are the most important building blocks for organizations in terms of creating a winning digital strategy?
Having a compelling vision to excite customers is the most important factor. There are a whole lot of digital and cultural change capabilities that you need, but you can’t do it without the vision, and then there are a series of building blocks, like Lego blocks, that are your data, your customer experience components, new ways of working, your people innovating, that make it work.
ISACA Now: You emphasize the importance of the customer voice being a driving force in making decisions. What guidance might you have for organizations to ensure that is the case in their decision-making process?
The customer voice is all about how you listen to the customer and then amplify their voice in every decision, in every activity you make. And so, data analytics, real-time connections, mobile connections, sentiment analysis, social media – these are all ways you can amplify the customer’s voice, but then you have to change the culture in an organization to hear it and use it, and that is probably the hardest part.
ISACA Now: Why is understanding life events of customers so valuable and important?
Most companies have made a successful living selling products, but in a world of ubiquitous search, you can find the lowest-priced product at a certain quality level in seconds. Now customers want to have a broader set of needs met, and one extremely good way of doing that is life events. Some companies use customer journeys, but those are more about how the sequence of meeting life events is enacted. Take a B2B customer: are they entering a new market? Are they doing a merger? Is there a change of CEO? Those all create needs, and there are products and services that need to be connected together to meet the life event.
ISACA Now: What are some of the common missteps organizations make when it comes to pursuing digital transformation?
The most common misstep, once you have a great vision, is to try to do a big bang. In our digital world, with all the new digital tools, we use test and learn. So, you borrow some lessons from Silicon Valley, like the MVP (minimum viable product). You try lots of things, you see what works, and once they work, you scale and integrate. That's a very different way of operating, so that's one of the biggest problems we see. Another is that companies feel they can do these things all themselves. Digital is a partnering world. So, how do you get better at partnering, sharing information appropriately, and using that collaboration to provide better services?
ISACA Now: Can you elaborate on the concept of a higher-value digital business model? What does that entail?
The average profitability of a supplier model is significantly less than the average profitability of an ecosystem driver model, but a much higher percentage of companies are supplier-dominant models than ecosystems. So, an ecosystem is a much higher value model, but it’s harder to achieve and there is significant consolidation amongst the players.
ISACA Now: From your interactions with boards and executive leadership, what stands out as the toughest types of decisions leaders have to make when it comes to digital transformation?
The most difficult question I hear from senior executives is 'Do we have the right talent?', particularly on the senior leadership team, and we often see quite a high turnover in companies that successfully transform. But also, how do we engage the brains and energy of all the people in the company? It's not just the senior leaders that have to transform the company; everybody has to. And so how do you engage the whole hearts and minds of everybody, and through that, change the culture from a hierarchical, linear project culture to an agile team, test-and-learn, minimum viable product culture?
ISACA Now: With a forward-looking lens, which new technologies or emerging digital themes do you see as having the biggest impact on reshaping the business landscape?
I’m a big proponent of the future of IoT because I think it will create great customer value, but with it comes all kinds of risks, and the whole cyber question around IoT, I think, is unanswered. AI will help with cyber, and I think one of the great potentials is the use of AI to do cyber analysis. I’m less concerned about the technologies themselves because they’ll change over time, but how do we provide better customer service at lower cost, and how do we avoid a world of the information rich and the information poor? One of the troubling trends we’ve already seen is a disproportionate spread of wealth in many countries, and I would love to see digital technologies create a better future for the next generation so that everybody has access to opportunities and education, and I think ISACA has a role to play in that.