NIST Risk Management Framework: What You Should Know

Baan Alsinawi

In late December 2018, NIST published a second revision of SP 800-37, Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. The revised publication addresses an updated Risk Management Framework (RMF) for information systems, organizations, and individuals, in response to Executive Order 13800 and OMB Circular A-130 regarding the integration of privacy into the RMF process.

Now that the dust has settled, we are taking another look at the update. If achieved as intended, its objectives will tie C-level executives more closely to operations and significantly reduce organizations’ information technology footprint and attack surface. The update also promotes IT modernization objectives, prioritizes security and privacy activities to focus protection strategies on the most critical assets and systems, and more closely incorporates supply chain risk management into the framework.

A Closer Look At The Updates
This version of the publication addresses how organizations can assess and manage risks to their data and systems by focusing on protecting the personal information of individuals. Information security and privacy programs share responsibility for managing risks from unauthorized system activities or behaviors, making their goals complementary and coordination essential. The second revision of the RMF now ties the risk framework more closely to the NIST Cybersecurity Framework (CSF). The update provides cross-references so that organizations using the RMF can see where and how the CSF aligns with the current steps in the RMF.

It also introduces an additional preparation step, addressing key organizational and system-level activities. At the organization level, these activities include assigning key roles, establishing a risk management strategy, identifying key stakeholders, and understanding threats to information systems and organizations. System-level preparation activities include identifying stakeholders relevant to the system; determining the types of information processed, stored, and transmitted by the system; conducting a system risk assessment; and identifying security and privacy requirements applicable to the system and its environment.
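In practice, teams often track these preparation activities as a simple checklist. Below is a minimal Python sketch of that idea; the task wording is paraphrased from the publication, and the schema and status field are illustrative assumptions rather than anything NIST prescribes.

```python
# Illustrative sketch: tracking RMF "Prepare" step tasks as structured data.
# Task wording is paraphrased from SP 800-37 Rev. 2; the schema is our own.
from dataclasses import dataclass

@dataclass
class PrepareTask:
    name: str
    level: str          # "organization" or "system"
    done: bool = False

PREPARE_TASKS = [
    PrepareTask("Assign key risk management roles", "organization"),
    PrepareTask("Establish a risk management strategy", "organization"),
    PrepareTask("Identify key stakeholders", "organization"),
    PrepareTask("Understand threats to systems and the organization", "organization"),
    PrepareTask("Identify stakeholders relevant to the system", "system"),
    PrepareTask("Determine information types processed, stored, transmitted", "system"),
    PrepareTask("Conduct a system risk assessment", "system"),
    PrepareTask("Identify applicable security and privacy requirements", "system"),
]

def outstanding(tasks, level=None):
    """Return unfinished preparation tasks, optionally filtered by level."""
    return [t for t in tasks if not t.done and level in (None, t.level)]

for task in outstanding(PREPARE_TASKS, level="system"):
    print(f"TODO ({task.level}): {task.name}")
```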

Preparation enables efficient and cost-effective execution of risk management processes. The primary objectives of organization-level and system-level preparation are to:

  • Facilitate better communication between senior leaders and executives in the C-suite, and system owners and operators
  • Align organizational priorities with resource allocation and prioritization at the system level
  • Convey acceptable limits regarding the selection and implementation of controls within the established organizational risk tolerance
  • Promote organization-wide identification of common controls and the development of tailored control baselines, to reduce the workload on individual system owners and the cost of system development and protection
  • Reduce the complexity of the IT infrastructure by consolidating, standardizing, and optimizing systems, applications, and services through the application of enterprise architecture concepts and models
  • Identify, prioritize, and focus resources on high-value assets and high-impact systems that require increased levels of protection
  • Facilitate readiness for system-specific tasks

The incorporation of supply chain risk management (SCRM) is another important theme addressed in the publication. Specifically, organizations must ensure that security and privacy requirements for external providers, including the controls for systems processing, storing, or transmitting federal information, are delineated in contracts or other formal agreements. It is ultimately the responsibility of the organization and the authorizing official to respond to risks resulting from the use of products, systems, and services from external providers.

Finally, SP 800-37 Rev. 2 supports security and privacy safeguards from NIST Special Publication 800-53 Revision 5. The updated RMF document notes that Revision 5 separates the control catalog from the control baselines that historically have been included in that publication. A new companion publication, NIST Special Publication 800-53B, Control Baselines and Tailoring Guidance for Federal Information Systems and Organizations, defines the recommended baselines.

In other changes to the RMF, Appendix F, System and Common Control Authorizations, now includes Authorization to Use (ATU) as an authorization decision applied to cloud and shared systems, services, and applications. It would be employed when an organization chooses to accept the information in an existing authorization package generated by another organization. Page 123 notes, “An authorization to use requires the customer organization to review the authorization package from the provider organization as the fundamental basis for determining risk… An authorization to use provides opportunities for significant cost savings and avoids a potentially costly and time-consuming authorization process by the customer organization.” Additionally, the appendix addresses a facility authorization, allowing systems residing within a defined environment to inherit that environment’s common controls, as reflected in the affected system security and privacy plans.

Summing It Up
SP 800-37 Rev. 2 promotes the integration of the agency’s privacy program into the RMF, allowing the organization to produce risk-related information on both the security and privacy posture of organizational systems and the mission/business processes supported by those systems. It also connects senior leaders to operations to better prepare for RMF execution, providing closer linkage and communication between the risk management processes and activities at the C-suite or governance level and the individuals, processes, and activities at the system and operational levels of the organization. All in all, these are welcome changes to the framework, as better integration means tighter and more efficient controls that ensure assets are properly safeguarded by private and public sector organizations.

Author's note: Baan Alsinawi, president and founder of integrated risk management firm TalaTek, has more than two decades of experience in information technology (IT). She is a member of ISC2 and is CISSP and ITIL certified.

Continuous Security Validation

Berk Algan

No corporate executive should feel secure.

Every day, we keep hearing about yet another company getting hacked or losing sensitive data. Many enterprises do not even realize their systems are compromised until they receive an unexpected notification from an external party. Cybersecurity remains a top risk for companies and a hot topic for boardrooms.

To fend off cyber threats, most companies focus on:

  • Hiring security professionals or third parties with expertise in various security domains
  • Establishing processes such as patch management and asset management
  • Implementing various security tools and monitoring devices
  • Creating control libraries in alignment with regulations and industry standards
  • Establishing security training and awareness programs

But how do we know our cyber defenses actually work?

Traditional Security Validation includes testing individual controls or a set of controls to ensure that they are designed appropriately and working effectively. For example:

  • Validating that a firewall is configured according to a company’s configuration standards is considered testing of a singular control (a minimal sketch of such a check follows this list).
  • Testing a set of relevant controls to verify whether the company is in compliance with the Payment Card Industry Data Security Standard (PCI-DSS) would be considered testing a set of controls.
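To make the first example concrete, here is a minimal Python sketch of a singular-control test that checks a firewall ruleset against a company configuration standard. The rule schema and both checks are hypothetical stand-ins for whatever your own standard specifies.

```python
# Hypothetical singular-control test: validate firewall rules against a
# company configuration standard. Rule schema and checks are illustrative.

RULES = [
    {"name": "allow-web",    "src": "0.0.0.0/0", "port": 443,  "action": "allow"},
    {"name": "allow-ssh",    "src": "0.0.0.0/0", "port": 22,   "action": "allow"},
    {"name": "default-deny", "src": "0.0.0.0/0", "port": None, "action": "deny"},
]

def validate_ruleset(rules):
    """Return findings where rules violate the (assumed) standard."""
    findings = []
    # Check 1: no administrative port (SSH here) open to the whole internet.
    for rule in rules:
        if rule["action"] == "allow" and rule["port"] == 22 and rule["src"] == "0.0.0.0/0":
            findings.append(f"{rule['name']}: SSH exposed to the internet")
    # Check 2: the ruleset must end with an explicit default-deny.
    if rules[-1]["action"] != "deny":
        findings.append("ruleset does not end with a default-deny rule")
    return findings

for finding in validate_ruleset(RULES):
    print("FAIL:", finding)
```

A PCI-DSS assessment would run many such checks across the full set of relevant controls rather than one ruleset in isolation.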

While testing security controls in a traditional way could serve its intended purposes, the company should not feel secure solely based on traditional point-in-time control testing. The reality is that threats and an organization’s systems change on a daily basis, and a traditional control test that was effective yesterday may no longer be effective in mitigating a threat today.

Adversaries will always look for any weakness in a company’s environment, ranging from misconfigured systems to overly permissive access rules. New threats, vulnerabilities and zero-days are identified every day.

The only effective way to combat this is to think and act like an adversary.

Continuous Security Validation allows an organization to take cyber attackers’ perspective and stress-test its security stance.

While it includes elements of the traditional validation methods described above, it focuses more on walking in hackers’ shoes.

To implement and execute Continuous Security Validation, a company could leverage industry best practices. A leading framework in this area is MITRE ATT&CK™ for Enterprise (ATT&CK).

ATT&CK for Enterprise is a framework that takes the perspective of an adversary trying to hack into a company using various known attack vectors. This framework provides a library of real-world hacking activities for companies to simulate in their own networking environment.

In its simplest form, an organization could pick a relevant attack vector (e.g., Exfiltration Over Alternative Protocol) from the ATT&CK Matrix and test its cyber defenses to validate that they could withstand that particular attack. It can then review and prioritize mitigation of identified gaps.
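A bare-bones harness for that exercise might look like the sketch below. The technique ID shown (T1048, Exfiltration Over Alternative Protocol) comes from the public ATT&CK matrix; the simulation and detection callables are placeholders you would wire to your own tooling, such as a breach-and-attack-simulation agent and your SIEM.

```python
# Minimal Continuous Security Validation harness (illustrative).
# Each test pairs an ATT&CK technique with a benign simulation and a
# detection check; both callables are stubs to be wired to real tooling.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TechniqueTest:
    technique_id: str                 # e.g., "T1048"
    description: str
    simulate: Callable[[], None]      # run a safe, benign emulation
    was_detected: Callable[[], bool]  # query SIEM/EDR for the expected alert

def run_validation(tests):
    """Execute each simulation, then record whether defenses caught it."""
    results = {}
    for test in tests:
        test.simulate()
        results[test.technique_id] = test.was_detected()
    return results

# Example wiring with stubbed callables; replace with real integrations.
tests = [
    TechniqueTest(
        technique_id="T1048",
        description="Exfiltration over alternative protocol",
        simulate=lambda: print("emulating benign outbound transfer..."),
        was_detected=lambda: False,   # pretend the SIEM raised no alert
    ),
]

for tid, detected in run_validation(tests).items():
    print(f"{tid}: {'DETECTED' if detected else 'GAP - prioritize mitigation'}")
```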

It’s important to note that internal red-teaming (an internal group taking the hackers’ perspective) is a core component of this approach, whereby these teams use real scenarios to test actual response and detection capabilities rather than just testing controls.

Continuous Security Validation will help a company: 

  • Increase its cyber resiliency by frequent testing and validation
  • Test the effectiveness of its security controls and tools in preventing specific attack vectors
  • Develop an organizational cyber threat model to focus on higher risk areas and key information assets
  • Methodically analyze identified security observations

At the 2019 GRC Conference in Fort Lauderdale, Florida, USA, to take place 12-14 August, I will further explore Continuous Security Validation and describe how a company could use it to reduce its cyber exposure. We will also review key elements of ATT&CK for Enterprise and discuss how it can be leveraged to stand up and operate a Continuous Security Validation process.

About the author: Berk Algan is a risk management executive who takes pride in building exceptional Governance, Risk and Compliance (GRC) functions and developing high-performing teams. He currently leads the Technology & Security Risk Management group at Silicon Valley Bank.

Integrating Human and Technical Networks in Organizational Risk Assessments

Charles Harry

The US government’s recent efforts to ban the introduction of specific foreign IT vendors’ equipment in government networks are emblematic of the growing concern among organizational leaders about the risk posed by global supply chains, highlighting the broad interdependencies between technical and human systems. Organizational leaders seeking greater efficiencies are finding that the confluence of technical, human, and supply chain-induced cybersecurity risk requires a deeper understanding of how each of these siloed processes works together in a highly choreographed and complex system. Specifically, how do we understand and measure the risk surface of human systems for our organization?

Despite the recent tensions between the US and China over the potential threat ZTE or Huawei present to US government systems and critical infrastructure, there has been a steady evolution in guidance on how to manage broad cybersecurity risk. National standards, including the latest version of the NIST Cybersecurity Framework (CSF), detail both the concerns and the need to account for these risks as part of a robust and comprehensive action plan for addressing those factors of enterprise risk. Yet while frameworks, guidance, controls, and other standards highlight the importance of conducting risk assessments, we often lack methodologies for assessing not only the individual elements of risk but also how they come together in a complex system. A handful of methodologies, such as FAIR, have recently emerged to quantify cyber risk for specific assets, but they have yet to map and integrate the technical, human, and supply chain elements of cyber risk to mission functions. Research, including at the University of Maryland, is demonstrating how both human and technical systems can be defined and measured.

A central concept in modeling the cyber risk of interdependent systems is to link the mission/business functions of an organization to the underlying information technology infrastructure. Each device in that mapped function has users connected to it, as well as some number of supply chains (e.g., hardware, software, data). This cascading set of interdependent technical and human networks is precisely why cybersecurity continues to be a persistent and evolving problem as new attack vectors, methods, and techniques are leveraged. Laptops work with routers, servers, and other devices, but people touch each of those devices as well, and the combination of the two supports mission/business functions. For example, a specific user might have access to some number of your organization’s devices that support your manufacturing line. That user maintains a certain level of access permission to each device (e.g., root privilege), might be more inclined to click on malicious links, and has exposure to the outside world (both physical and digital) that makes them a target for compromise.

As you begin to account for every mission/business function of concern to senior leadership, a seemingly dizzying array of combinations emerges to overwhelm even the most intrepid risk manager. Risks might stem from a remote attack via a vulnerability in an exposed system, a user who clicks on a spearphish, or a vendor that does not maintain adequate controls. This technical and human complexity in the organizational attack surface is what threat actors leverage to achieve their goals. While the task of mapping missions to technical and human networks seems daunting, defenders do have an advantage … you know your organization. While threat actors must discover what systems and people hold the keys to their objectives, a well-defended organization can focus its resources to assess, architect, train, and defend the devices, people, and supply chains that support the most critical business functions.

Reducing user-induced cyber risk in practice might therefore require us to rank users who represent greater risk to specific missions based on their device access, permission levels, propensity to be compromised, and account attack surface (number of social media accounts, e-mail, etc.). A similar activity can be performed for supply chain vendors: can we map and rank the risk of vendors based on the hardware, software, and data they provide our organization, with the ranking derived from the length of the supply chain, the vendor’s cyber risk posture, and the number and importance of connections they maintain to our IT network? Creating indices and rankings that order the broad set of human-induced risk to the organization enables prioritization of training resources, technical and organizational controls, and leadership focus on the most important relationships, personnel, and technical systems underpinning the organization.
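As a rough illustration of such a user risk index, the Python sketch below ranks users by the four factors just described. The weights and sample data are purely illustrative assumptions, not a validated scoring model.

```python
# Illustrative user risk index: rank users by device access, permission
# level, propensity to be compromised, and account attack surface.
# Weights and data are assumptions for demonstration only.

USERS = [
    {"name": "alice", "critical_devices": 12, "admin": True,  "phish_clicks": 0, "accounts": 3},
    {"name": "bob",   "critical_devices": 2,  "admin": False, "phish_clicks": 3, "accounts": 9},
    {"name": "carol", "critical_devices": 7,  "admin": True,  "phish_clicks": 1, "accounts": 5},
]

def risk_score(user):
    """Weighted composite of the four risk factors (weights are illustrative)."""
    return (3.0 * user["critical_devices"]        # access to mission-critical devices
            + 5.0 * (1 if user["admin"] else 0)   # elevated permission levels
            + 4.0 * user["phish_clicks"]          # history of risky behavior
            + 0.5 * user["accounts"])             # external account attack surface

for user in sorted(USERS, key=risk_score, reverse=True):
    print(f"{user['name']:>6}: {risk_score(user):5.1f}")
```

The same pattern extends to vendors by swapping in supply chain length, vendor posture, and connection criticality as the factors.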

The persistent increase in significant cyber events stemming from a complicated set of technical and human attack vectors necessitates a new approach for assessing systemic risk. Mapping missions to IT infrastructure, users, and supply chains enables a clear way to discuss and thoughtfully address any number of attack scenarios against an organization. Without this map connecting missions with human and technical networks, we will remain lost in a sea of cyber incidents.

MIT CISR Research Forum: Designing for Digital Leverage

Arno Kapteyn

The MIT CISR Research Forum (Europe), hosted by Heineken, was recently held in Amsterdam. As a partner of MIT CISR, ISACA was represented at the event. Presentation titles on the agenda, like “Quick Look: What Is Your Digital Business Model?” by Joe Peppard, “Digitized Is Not Digital” by Jeanne Ross, and “Managing Organizational Explosions During Digital Transformation” by Nick van der Meulen, provided a good general sense of what the event would be all about.

Having heard a keynote address by Peter Weill on the topic of digital transformation at an earlier ISACA event helped me formulate some thoughts prior to the event. Two questions kept coming up: “Is digital transformation this year’s hype, or really something new?” and “How does digital transformation affect me and my customer organizations?”

During the event these questions remained with me while a lot of insightful information was shared with attendees. As may be expected from MIT CISR, both the academic value and the topics of research were excellent. Discussing these topics with the event attendees (senior-level managers from various departments of the sponsor and patron organizations) helped close the gap between academic research and practical application in the organizations represented. Ultimately, the question “How does it affect me and my customer organizations?” did not get answered, but it generated a lot of food for thought. Pathfinder organizations are monetizing digital content they own or have access to by “wrapping” (adding value from digital to current products or services), selling information, or improving processes. Finding creative ways to monetize their digital content may prove to be the strategic differentiator for many organizations in the coming years.

On the question of hype versus something really new, the presentation from Jeanne Ross, “Digitized Is Not Digital,” was thought-provoking. According to this presentation, digitization helps improve operational excellence by digitizing core transactions and creating discipline in back-office processes. Digital helps improve the speed of business innovation by empowering people to experiment, release, and constantly enhance digital offerings. So, digital transformation is more about the business creating new and/or enhanced value streams with digital offerings, the next step in the information age.

The event did leave me with one follow-up question that did not get answered: is every brick-and-mortar organization supposed to start generating value with digital offerings in the future – even those organizations that are currently completely focused on physical products? The suggestion I was left with is that organizations not looking at their (future) digital offerings are today’s dinosaurs, waiting to become extinct. That is a concept worthy of further consideration.

Digital Ethics Rising in Importance

Chris K. Dimitriadis

The innovative capabilities of technology – as well as the potency of that technology – are advancing at a remarkable pace, creating new possibilities in today’s digital economy. This is mostly wonderful, with one large caveat: we must keep in mind that just because we have the ability to deploy a new technological innovation does not mean that we should. The need to prioritize digital ethics is becoming increasingly important for all organizations that are mindful of the imprint they are leaving on society.

The transformative ways in which new technologies – particularly artificial intelligence – are being utilized call for deeper discussions around the ethical considerations of these deployments. Depending on the organization and its level of ambition for implementing these technologies, that might even include the need for a chief ethics officer to ensure these issues receive appropriate attention at high levels of the organization. Not every organization will have the need or the capacity to invest in a new role overseeing ethics, but virtually all organizations should have their chief information security officer – or other security leadership – devote sufficient time to anticipating and addressing how their organization’s technological innovations could be misused by those with ill intent.

Last month, the European Commission took a worthwhile step toward acknowledging this new imperative, putting forward a series of recommendations that emphasize the need for secure and reliable algorithms and data protection rules to ensure that business interests do not take precedence over the public’s well-being. As the Commission’s digital chief, Andrus Ansip, put it, “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.” Elsewhere around the globe, the Australian government is exploring policy that would seek to ensure that AI is developed and applied responsibly. “AI has the potential to provide real social, economic, and environmental benefits – boosting Australia’s economic growth and making direct improvements to people’s everyday lives,” said Karen Andrews, the country’s minister for industry, science and technology. “But importantly, we need to make sure people are heard about any ethical concerns they may have relating to AI in areas such as privacy, transparency, data security, accountability, and equity.”

While governmental agencies should absolutely play a leading role in addressing these new challenges, a more comprehensive global response is needed. Encouragingly, some corners of academia are recognizing and acting upon the challenge, with Stanford and Massachusetts Institute of Technology among the institutions that are investing heavily in human-centered AI education. Existing professionals also will need guidance on how to account for the ethical implications of AI’s accelerated usage. The potential for malicious uses of AI has generated deep concern from global researchers and industry leaders, yet is seldom given due deliberation when products are being ideated and developed. The stakes are becoming far too high to tolerate such oversights. ISACA research on digital transformation shows that social engineering, manipulated media content, data poisoning, political propaganda and attacks on self-driving vehicles are leading, top-of-mind concerns for security practitioners when it comes to threats posed by maliciously trained AI.

Emerging digital ethics concerns are impacting a wide array of sectors, many of which carry inherent public health and safety ramifications, such as military training, medical research and law enforcement. Virtually all sectors are benefiting from technology advancements with the potential to drive huge benefits for society, but they also face serious ethical questions that should not be discounted. Published data from nearly 70,000 OkCupid users raised an after-the-fact ethical firestorm about what manner of data harvesting and public release should be considered above-board. Police increasingly face difficult decisions in balancing new surveillance capabilities with the privacy rights of those they are charged to protect. While AI understandably is drawing much of the recent attention when it comes to digital ethics, the ethical challenges stemming from digital transformation extend much further. Another emerging technology, augmented reality, raises several ethical gray areas, not least of which is how to view the blurring of the line between which aspects of an experience are real and which are not. Blockchain implementations also open the door to ethical conundrums, such as how private information recorded on a blockchain could potentially be exploited. And ethical considerations will become more magnified in the coming decade, as quantum computing advancements come into sharper focus, setting in motion new ethical and security risks involving sensitive, encrypted data.

These are just a sampling of the serious issues that professionals and their organizations need to be prepared for when it comes to ethics in the digital transformation era. Increasing adoption of AI and other high-impact technologies comes with upside worthy of great optimism, but the risks, too, are increasing. Organizations owe it to the public to make sure that the rush to innovate does not make a new deployment’s trendiness or potential profitability the only measure of whether it should be greenlighted.

Editor’s note: This article originally appeared in CSO.

Controls in the Cloud – Moving Over Isn't As Easy As Flipping a Switch

Shane O'Donnell

Lift and shift.

While this phrase is not new, it’s now said with regularity in relation to moving infrastructure to the cloud. Providers promise seamless transitions as if you were moving a server from one rack to another right next door. While moving to the cloud can put companies in a more secure position, proper care needs to be taken. Assuming everything is the same can be a fatal mistake, one that is happening on a regular basis.

No-brainers
From a physical security perspective, moving infrastructure to the cloud will almost always be more secure. Large cloud providers place infrastructure in state-of-the-art data centers with top-of-the-line physical security measures. Organizations often do not have the budget, time, or expertise to build their own on-premises data centers to these specifications. I have seen the full spectrum of data centers over the years (umbrellas over server racks as a control to protect from a leaky roof, anyone?). Even the most advanced data centers we see on premises do not match those of the large cloud providers.

What hasn’t changed
Requirements and basic control concepts have not changed as the proliferation of cloud infrastructure unfolds. User access, change management, and firewalls are all still there. Control frameworks such as COBIT, ISO 27001, NIST CSF, and the CIS controls still apply and have great value. Sarbanes-Oxley controls are still a driver of security practices for public companies.

What has changed
How the controls of the past are performed has changed upon moving to the cloud. Here are some common examples:

Security administration is more in-depth. Admin rights, among the highest-risk access roles in organizations, are a main target of malicious actors. Handling admin rights in the cloud is different and needs proper due care. Knowing which roles are administrative in nature can be confusing, so it’s important to implement correctly from the start. Separation of duties between key administration and key usage is essential. Having the proper tools to administer access can be daunting. Don’t assume your cloud provider will guide you through all these intricacies; plan ahead.
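As one example of what that due care might look like, the sketch below flags identities that hold both key-administration and key-usage permissions, a separation-of-duties violation. The permission names are generic placeholders, not any particular cloud provider’s IAM vocabulary.

```python
# Illustrative separation-of-duties check: no identity should hold both
# key-administration and key-usage permissions. Permission names are
# generic placeholders, not a specific provider's IAM actions.

ADMIN, USE = "kms:AdminKeys", "kms:UseKeys"

GRANTS = {
    "alice":       {ADMIN},
    "bob":         {USE},
    "app-service": {ADMIN, USE},   # violation: can both manage and use keys
}

def sod_violations(grants):
    """Return identities that can both administer and use encryption keys."""
    return [who for who, perms in grants.items() if ADMIN in perms and USE in perms]

for who in sod_violations(GRANTS):
    print(f"Separation-of-duties violation: {who} can administer AND use keys")
```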

Perimeter security has changed. While layered security has always been important, it becomes even more important in the cloud. Recently, several news stories have covered breaches caused by things like “containers being exposed to the internet,” with a large cloud provider’s name attached. At first blush, I have heard most people blame the cloud provider, but most often these breaches are the cloud customer’s fault. Some important items to think about are proper DMZs for critical and/or regulated data, firewall configurations, and proper restriction of admin rights to those resources.

Securing connectivity becomes more important. Servers and other hardware won’t be sitting down the hall when moving infrastructure to the cloud. Access will almost always be remote, thus creating new security challenges. Understanding all ingress and egress points is essential, as is putting proper controls around them.

Encryption. Encrypting data will be a top concern for many organizations, as the data is now “somewhere else.” The good news is that the native encryption tools of many large cloud providers are advanced, and most times data at rest can be automatically encrypted using a strong algorithm. This is a huge step up right off the bat for many companies. Because encryption is so important in the cloud, key management becomes a high-risk control. Policies, procedures, and controls around key management need to be well thought out.
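To illustrate why key management deserves its own policies and controls, the sketch below uses the third-party Python cryptography package (an assumption for illustration; a provider’s native key management service would normally handle this) to encrypt data at rest and then rotate the key without losing access to older ciphertext.

```python
# Key rotation sketch using the third-party "cryptography" package
# (pip install cryptography). Cloud providers' native KMS offerings expose
# analogous rotation features; this just illustrates the control.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
token = old_key.encrypt(b"customer record")   # data encrypted under the old key

new_key = Fernet(Fernet.generate_key())
ring = MultiFernet([new_key, old_key])        # first key is primary for new data

rotated = ring.rotate(token)                  # re-encrypt token under the new key
assert ring.decrypt(rotated) == b"customer record"
print("rotated ciphertext decrypts correctly; the old key can now be retired")
```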

Fear not, it’s not all bad!
While some challenges may be present as outlined above, moving to the cloud is most often a great move for an organization. Improved security, improved performance, and cost savings are only a few benefits of a cloud migration. Multiple frameworks exist to provide a secure path to cloud adoption, so organizations are not approaching this “blind.” A cloud security framework can guide you through the process of secure adoption and also provide assurance over cloud adoptions you have already performed. We are helping clients in all industries with these cloud migrations/adoptions and have some great perspective on dos, don’ts, and best practices.

Editor’s note: For more cloud-related insights, download ISACA’s complimentary new white paper, Continuous Oversight in the Cloud.

Is Your GRC Program Ready to Thrive in the Digital Economy?

Sudhakar Sathiyamurthy

Digital technologies have profoundly changed our lives, blurring the lines between the digital and physical worlds. From its humble beginnings, the constellation of tools and technologies that empowers organizations has grown steadily smarter. While digitalization makes businesses intelligent and offers immense value, it also opens up a diverse range of risks. Organizations often face challenges in effectively sensing and managing digital risks and in demonstrating reasonable compliance.

The impediments to effective GRC often show up as operational shortcomings: inadequate visibility into crown-jewel assets, a siloed view of risks, risk and compliance reports that do not cater to the right audience, redundant approaches that restrain correlation and compound risk exposure, poor user experience, and overwhelmingly complex GRC automation. With digital transformation going mainstream, organizations that fail to keep pace with relevant GRC strategies are likely to put themselves at a competitive disadvantage.

The following list summarizes the common misconceptions about the role of GRC in the digital ecosystem:

1) Traditional risk and compliance management practices organize operations into chunks of disconnected units – disparate departments merely administering their own chores to satisfy compliance requirements, with no homogeneity between risk frameworks, risk-scoring techniques, and terminologies. This leads to misconceptions and cognitive disparities about GRC, and the silo model also wastes resources and breeds inefficiency through isolated approaches. Organizations should focus on bolstering the effectiveness of GRC by breaking down silos and setting common or comparable frameworks and definitions.

2) With digitalization, businesses end up processing heaps of data of all forms – users’ searches, clicks, website visits, likes, daily habits, online purchases and much more – to achieve their competitive edge. With data as the fuel of digitalization, this also puts the organization in the path of malicious attacks and information theft. Given the fast pace of digital business and the burgeoning data underpinning its processes, GRC cannot work as a separate competence outside the digital processes; instead, GRC should be integrated into the design of digital transformation.

3) Digitalization is making inroads with novel delivery methods, and the supply chain is too big to ignore. The burgeoning growth of third-party relationships demands credible and timely insight into the risk and compliance posture of supply chain entities. Remember, your organization is only as strong as its chain of suppliers, and any weak link in the chain is an opportunity for perpetrators to intrude. GRC cannot make the cut with a checklist focus.

4) GRC should communicate in the language of its audiences to demonstrate its value. How many times have we seen a risk assessment conducted at a theoretical level, highlighting issues that management is already aware of; a frontline employee questioning the context of a requirement in the controls framework and how it applies to their area of support; or a board’s attention lost to technically overloaded risk presentations? It all sums up to a simple yet most complex expectation: “communication.” GRC should tailor its language to its audiences to improve the user experience and to demonstrate value to the business.

5) As speed and agility are the key influencers of success in the digitalization journey, administering GRC in spreadsheets and shared drives delivers clearly diminishing value for organizations. At the same time, automation is not the ultimate fix; the use of siloed technologies without sufficient collaboration is far more upsetting than manual paperwork. Remember, the goal of GRC solutions is to deliver business value by providing accurate, credible and timely intelligence on risk and compliance, rather than getting tangled in solution warfare.

Digitalization is spreading its tentacles across organizations. Though organizations are challenged to find new avenues for bulletproofing GRC, successful risk practitioners are staying ahead of the game by focusing on business value creation.

Editor’s note: Sathiyamurthy will provide more insights on this topic in his “Bulletproof your Governance, Risk and Compliance program - GRC by Design” session at ISACA’s 2019 North America CACS conference, to take place 13-15 May in Anaheim, California, USA.

Author’s note: Sudhakar Sathiyamurthy, CISA, CRISC, CGEIT, CIPP, ITIL (Expert) is an experienced executive and director with Grant Thornton’s Risk Advisory Services, with a broad range of international experience in building and transforming outcome-driven risk advisory services and solutions. His experience has been shaped by helping clients model and implement strategies to achieve a risk-intelligent posture. Sathiyamurthy has led various large-scale programs helping clients stand up and scale risk capabilities. He has led and contributed to various publications, authored editorials for leading journals, and frequently speaks at international forums. He can be contacted at sudsathiyam@gmail.com.

Big Data: Too Valuable and Too Challenging to Be Overlooked

Chris K. Dimitriadis

As the new year begins and business leaders refine their 2019 plans, how to effectively deploy technology increasingly will be a focal point of conversations in the boardroom and elsewhere throughout the enterprise. While trending technologies such as artificial intelligence, blockchain and 5G wireless networks command much of the mindshare in the new year, one technology that might no longer be deemed buzzworthy should nonetheless be a major consideration in 2019 for the C-suite and security teams alike – how to derive value while mitigating risk from big data.

The term “big data” has been in circulation for many years, but big data continues to evolve in scope and capability, especially with AI, augmented analytics and other emerging technologies enabling data to be harnessed in more sophisticated fashion. ISACA’s 2018 Digital Transformation Barometer shows that big data remains the technology most capable of delivering organizations transformative value, and it is easy to see why. The positive potential of big data is enormous, spanning virtually all industries and impacting both the public and private sectors. Of critical importance, organizations can tap into big data sets to better understand their customers and configure predictive models that allow them to be more strategic and proactive in their business planning. While the benefits for private-sector enterprises are immense, there is perhaps even more upside for society, generally. For example, big data can be used to accelerate the progress made in scientific research, improve patient outcomes in healthcare by revealing more nuanced treatment patterns and aid in the modernization of urban centers by allowing cities to more effectively govern traffic flow and the deployment of city resources. In the context of these and other high-impact innovations that are in progress, the International Data Corporation (IDC) made the whopping projection that worldwide revenues for big data and business analytics will reach $260 billion by 2022.

Despite the considerable enthusiasm for big data-driven projects and use cases, big data also presents a range of evolving challenges from a security and privacy standpoint. All emerging technologies introduce new threats, and the same holds true for big data. While many of the fundamentals of network security apply to big data, there are some distinct considerations when it comes to securing big data. To store big data, enterprises often turn to NoSQL databases, which allow for more scalability than conventional relational databases but introduce new cost and security challenges. Additionally, traditional controls such as encryption may introduce bottlenecks due to the size of the data, meaning practitioners need to become more creative in protecting big data. Data anonymization, which allows organizations to protect the privacy of individuals within a data set, is typically an effective approach, and can be especially useful when enterprises are working with third-party vendors. Further, security frameworks, particularly those that align with pertinent standards and regulations, can be utilized during big data implementation projects in order to incorporate all appropriate controls by design. These frameworks also help organizations avoid taking shortcuts in their data governance that could open the door to a large-scale breach or, on a less dramatic but still significant note, identify inefficient practices that do little to help organizations extract value from their data.
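As a small illustration of that anonymization approach, the sketch below pseudonymizes direct identifiers with a keyed hash before records are shared with a third party. Strictly speaking, keyed hashing is pseudonymization rather than full anonymization, and the field names and secret handling here are illustrative assumptions.

```python
# Pseudonymization sketch: replace direct identifiers with keyed hashes
# before sharing records with third parties. Standard library only; field
# names and secret handling are illustrative.
import hashlib
import hmac

SECRET = b"keep-me-in-a-vault-not-in-code"   # assumption: stored securely elsewhere
PII_FIELDS = {"name", "email"}

def pseudonymize(record, secret=SECRET, pii=PII_FIELDS):
    """Return a copy of the record with PII fields replaced by keyed hashes."""
    out = {}
    for field, value in record.items():
        if field in pii:
            digest = hmac.new(secret, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]   # truncated for readability
        else:
            out[field] = value
    return out

print(pseudonymize({"name": "Ada Lovelace", "email": "ada@example.com", "purchases": 7}))
```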

Whatever approaches are taken, enterprise leaders need to be every bit as committed to safeguarding big data as they are to the data’s collection and utilization. Without a doubt, big data presents an attractive target to attackers since big data is highly valued – after all, the bigger the data, the bigger the breach. Several attack types exist, potentially impacting both the confidentiality and the integrity of the data, meaning security practitioners must possess an overarching understanding of the threats that could impact the data. This challenge becomes all the more difficult considering the wide variety of sources and data types that encompass big data.

Although the security risks that accompany big data can be daunting, addressing these concerns head-on is the only viable option, as big data becomes an increasingly valuable asset for enterprises to harness. Not only does the proliferation of data in the digital transformation era create new security risks, but the complexity of storing and managing the data can contribute to lower employee morale and higher turnover among IT professionals, according to a Vanson Bourne study. All of these factors call upon organizations to develop a cohesive, holistic strategy for big data, with extensive collaboration between the C-suite and enterprise security leaders. We have seen the hype around many emerging technologies ebb and flow in recent years, but the need to effectively handle big data has become a fixture on the enterprise landscape that will require ongoing attention and investment in the new year, and beyond.

Editor’s note: This post originally published in CSO.

The Business Risks Behind Slow-Running Tech

Anna Johannson

Entrepreneurs and IT leaders frequently underestimate how much slow technology can negatively impact a business. It’s tempting to wait as long as possible to upgrade or replace your team’s devices; after all, every additional month you get out of a device results in measurable cost savings for the business. But all those slow, aging devices are probably interfering with your business more than you realize.

The roots of slow technology
Slow technology comes in many forms, but the symptoms are consistent: processing becomes slower, making it harder for employees to complete their tasks in a timely manner, and occasionally productivity stalls altogether (as when those devices crash).

Generally speaking, there are three main influencing factors that can negatively impact a device’s speed:

  • Age. First and most notably, devices tend to slow down as they get older. Their processors don’t work as efficiently, and disk fragmentation can interfere with how the device functions. On top of that, new programs tend to be designed for faster, more up-to-date machines, which means older computers can’t run them as intended, resulting in a kind of illusory slowdown.
  • Malware. A sudden or inexplicable slowdown may be the result of malware infecting the device. In some cases, this is an easy problem to fix; a quick cleanup can instantly restore the device to full working order. In other cases, more intensive troubleshooting may be required, or the device might need to be wiped clean.
  • Improper use. Machines can also suffer tremendous slowdown if they aren’t being used responsibly. For example, if an employee spends lots of time downloading files, but never deletes those files, or if they have tons of installed programs that they never use, the computer won’t work as efficiently as it could. Employees may also misreport slow devices; if they have 39 tabs open in a web browser and one of them won’t load as quickly as they would like, the problem probably isn’t with the device itself.

The effects of slow tech
As for how that speed affects productivity, there are several areas of impact to consider:

  • Actions and tasks per day (or per hour). This is the most impactful effect, and the most obvious one. If employees face even a slight delay when attempting to interact with in-app elements, or when performing their most basic tasks, those small pieces of interference can quickly add up to compromise many hours of productivity. Depending on the severity of the problem, a slow device can cost you upwards of an hour per day, per employee (see the quick calculation after this list).
  • Availability of new programs. Dealing with a slow device can also affect which types of programs an employee is able to run. If they feel their device is old, they may be less willing to update their existing programs (which ultimately yields a security risk). They may also intentionally avoid downloading and using new programs that would otherwise facilitate greater productivity, or new responsibilities.
  • Employee morale. Of course, being forced to tolerate a slow device can also result in decreased employee morale. Over time, your employees will grow more frustrated, aware that they aren’t working to their full potential, and that frustration will result in many hours of lost work (not to mention higher absenteeism).
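To put that hour per day in perspective, here is a quick back-of-the-envelope calculation; every figure in it is an illustrative assumption.

```python
# Back-of-the-envelope cost of slow devices; every figure is an assumption.
employees = 50
lost_hours_per_day = 1.0     # upper end cited above
loaded_hourly_cost = 40.0    # USD per hour, fully loaded
workdays_per_year = 230

annual_cost = employees * lost_hours_per_day * loaded_hourly_cost * workdays_per_year
print(f"Estimated annual productivity loss: ${annual_cost:,.0f}")
# -> Estimated annual productivity loss: $460,000
```

Even if the real loss is a quarter of that, it dwarfs the cost of a routine hardware refresh.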

Fixing the problem
So, what can you do to fix the problem?

  • Clean up any malware. First, investigate any slow devices to see what the real root of the problem is. If there are any instances of malware, make sure to remove them, and test the device again. While you’re at it, make sure your proactive defenses (such as firewalls and antivirus software) are working effectively.
  • Instruct employees on proper use. Host a seminar or send out a memo that instructs employees how to properly care for their devices, especially if they’re allowed to take those devices home as if they were personal belongings. Give them tips for how to keep their devices functioning optimally, and how to temporarily boost speed for intensive applications.
  • Invest in new upgrades. If you’re still dealing with old tech, make an effort to upgrade it. Sometimes, you can get by with a RAM upgrade. Other times, you may need to replace the device entirely. But remember—this is a long-term investment in your team’s productivity.

Correcting, upgrading, or replacing your slow technology can be both costly and time-consuming, but it’s almost always worth the effort. Not only will your team be able to utilize more resources and work faster, they’ll be happier—and that morale will almost certainly have a positive impact on your business’s profitability. Stay proactive, and take action on slow devices before they have a chance to interfere with your work.

All Talk, Little Action: AI and Digital Ethics in People Technology

Bhumika Zhaveri

As we continue the end-of-year review on all things tech, digital ethics and the progress of artificial intelligence (AI) in people-related technologies spring to mind. People tech affects HR, recruitment and other areas that enable businesses to hire, manage and plan their key asset – people. With new suppliers emerging constantly, it is very difficult for businesses to distinguish technology that is ethical with regard to data, code and algorithms from technology that is not.

The first thing to highlight is that AI is a huge buzzword in people tech these days. However, it is misused more often than it should be, resulting in confusion for businesses that simply may not have the time to keep on top of tech or research it before buying, which typically costs them significant resources. To clarify, AI has several strands, two of which are machine learning and automation. These two see by far the most use in people tech at the moment, whereas other forms of AI are more relevant in other sectors. As an example, autonomous cars use robotics and other relevant strands of AI.

Now, regardless of the use of AI and its specific strand, especially at the algorithm-building stages, it is extremely important for every developer and tech business not only to think about “ethics” and “biases,” but to actually implement practices that help them tackle their own challenges with regard to ethics and biases, as well as those of their employees and users. This truly allows them to build and code purpose-driven, value-add commercial products. Increasingly, experts, individuals and organizations are talking about this important topic, from the TechUK committees I participate in to the IEEE guidelines I contribute to globally.

However, very little has been seen in terms of action, and so, for my part, I am “practicing what I preach.” While we are a startup, and reviewing the code for new features does add a couple of hours to my workload, it is very satisfying to know that this work comes from a place of supporting users. In addition, we prioritize careful data use and management; we will strictly use only the data that helps our users with analytics (based on what our platform offers) and provides a better experience.

How can larger tech companies and software houses implement this? I believe that the larger the business, the easier it should be to have processes and resources that effectively address the desired outputs of the business vision and support customers, while also serving as an in-house ethics and bias reviewer. This gives businesses a lot of power internally to follow guidelines drawn up by governments and other organizations working actively to support this framework-building.

There is no doubt that 2019 will be a key year for growth in digitization, automation, augmented analytics and blockchain. So, I really hope that businesses stop talking about the fundamental challenges of digital and AI ethics, and start building tools and frameworks to monitor them.

About the author: Bhumika Zhaveri is a non-conventional, solutions-driven technology entrepreneur and businesswoman. As an experienced HR technologist, she has expertise in HR and recruitment: technology and programme management for change and transformation. Privileged to look at challenges differently than most due to varied life, personal and professional experiences, she is actively involved with TechUK and IEEE on data ethics, AI and digital committees, as well as the Tech She Can charter with PwC, Girls Who Code and similar organizations supporting women in STEM. Currently, she is also the tech advisor for Resume Foundation and Bridge of Hope, while also being a founding member of Digital Anthropology.
