@ISACA Volume 23  15 November 2017

Securing Artificial Intelligence


Bruce R. Wilkins

Artificial intelligence (AI) encompasses a wide range of technologies and techniques that are often overgeneralized and misunderstood by the public. It is often thought that there is no way to secure an AI application that performs cognitive processing by learning and adapting its decisions in real time; securing AI, the argument goes, would be like trying to secure a person. As security professionals, we know that statement is not true, but what is the truth?

If we divide the problem, as we do most security problems, into general controls and application controls, we can immediately see the fallacy of the previous statement. All systems have general controls, and those general controls can be configured as securely as the configuration tools allow. It should be noted, however, that if the problem set the AI is solving is to configure itself, there is the possibility that the AI application could misconfigure itself. This is easily solved by removing privilege from the AI application or modifying its unwanted behavior. The more interesting general controls are whitelists and web proxy (e.g., Blue Coat) filters. In these examples, an AI application could be taught to discover content. When it encounters a URL that it cannot reach, it would immediately work through permutations to circumvent the whitelist or proxy filter. Its success would be both a victory and a failure. For this discussion, let us set aside these obvious extreme ends of the spectrum.
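To make that whitelist-evasion scenario concrete, here is a minimal, invented sketch of a content-discovery agent that, when blocked, tries simple permutations of the blocked hostname until one passes the filter. All hostnames, rules and function names here are hypothetical, not drawn from any real product:

```python
# Hypothetical illustration of allowlist evasion by a learning agent.
# The allowlist and the permutation strategy are invented for this sketch.

ALLOWLIST = {"docs.example.com", "cdn.example.com"}

def filter_allows(host):
    """Stand-in for a whitelist/proxy filter decision."""
    return host in ALLOWLIST

def permute(host):
    # A few naive mutations an agent might try against the filter.
    yield host.replace("www.", "")
    yield "cdn." + host.split(".", 1)[-1]
    yield "docs." + host.split(".", 1)[-1]

def fetch_with_evasion(host):
    """Return a reachable hostname, trying permutations when blocked."""
    if filter_allows(host):
        return host
    for candidate in permute(host):
        if filter_allows(candidate):
            return candidate  # the agent has circumvented the control
    return None

print(fetch_with_evasion("www.example.com"))  # finds an allowed permutation
```

The point of the sketch is the security engineer's dilemma described above: the agent succeeding at its discovery task is exactly the same event as the general control failing.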

We can divide AI into 2 major categories: AI techniques that use a fixed solution space and those that do not. A solution space can be explained simply as a node network, convolutional neural network, nodal table or other decision structure that represents all the combinations the AI application can execute based on the data with which it is presented. The solution space can remain static at runtime, or it can be modified and become dynamic. AI that uses a dynamic solution space falls within an area of AI study called cognitive AI, which focuses on getting the computer to reason like a human being. This is not pattern recognition, language processing or visual processing; it allows the computer to truly learn new knowledge and apply that knowledge to its decision making. This creates a unique issue for the security engineer. If the AI application is constantly learning and its cognitive paths cannot be predicted, how do we know that the AI application is working properly? We do not know, and its programmers probably do not know either. But more important than whether the AI is working improperly is understanding the complexity of why the AI application chose a given decision sequence.
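The static/dynamic distinction can be made concrete with a deliberately tiny sketch (invented for illustration, with a decision table standing in for any real solution space): a static solution space is read-only at runtime and therefore fully auditable, while a dynamic one rewrites its own decision structure in production, so its deployed state no longer determines its behavior:

```python
# Hypothetical illustration: a "solution space" as a simple decision table.

class StaticSolutionSpace:
    def __init__(self, rules):
        # rules: mapping from observed feature -> decision, fixed at deploy time
        self._rules = dict(rules)

    def decide(self, feature):
        # Read-only lookup; runtime behavior is fully predictable from the table.
        return self._rules.get(feature, "unknown")


class DynamicSolutionSpace(StaticSolutionSpace):
    def decide_and_learn(self, feature, feedback):
        # The application rewrites its own decision structure in production,
        # which is what makes verifying "correct" behavior so difficult.
        self._rules[feature] = feedback
        return self._rules[feature]


static = StaticSolutionSpace({"truck": "alert"})
dynamic = DynamicSolutionSpace({"truck": "alert"})
dynamic.decide_and_learn("drone", "alert")

print(static.decide("drone"))   # "unknown" -- behavior auditable from deployed state
print(dynamic.decide("drone"))  # "alert" -- learned at runtime, unpredictable in advance
```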

The more popular AI applications are fixed solution space applications, such as expert systems, deep reasoning and deep learning. These types of AI applications work with a “learning system” that is presented with hundreds, thousands or even millions of data examples for a given problem. The problem can be identifying beautiful faces, finding a truck in an image, recognizing a given word, identifying a certain type of radio signal or any other large data problem. The learning software processes the learning data to place acceptable patterns into its solution space. It then understands what a beautiful face is, what a truck is, how to distinguish words or how to identify a radio signal. This solution space is then put into the test environment and tested. While being tested, the AI application does not modify its solution space. If you want to teach the AI application, you go back to the learning system, provide additional data and then move the new solution space into the test environment. Finally, once the AI application is providing acceptable results, that solution space is moved into the operational environment.
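That learn-offline, test, then promote lifecycle can be sketched as follows. This is a minimal, hypothetical illustration: the nearest-centroid "learner" and the function names are stand-ins invented for this sketch, not any real AI framework:

```python
# Hypothetical sketch of the fixed-solution-space lifecycle: the learning
# system builds the solution space offline, the frozen space is exercised in
# a test environment, and only an acceptable version is promoted to operations.

def learn(examples):
    """Learning system: build a frozen solution space (per-label centroids)."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            s[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in s) for lbl, s in sums.items()}

def decide(space, features):
    """Runtime decision: read-only use of the frozen solution space."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
    return min(space, key=lambda lbl: dist(space[lbl]))

def promote_if_acceptable(space, test_set, threshold=0.9):
    """Gate between the test and operational environments."""
    correct = sum(decide(space, f) == lbl for f, lbl in test_set)
    return space if correct / len(test_set) >= threshold else None

training = [((0.0, 0.1), "noise"), ((0.2, 0.0), "noise"),
            ((5.0, 5.1), "truck"), ((4.8, 5.3), "truck")]
space = learn(training)
operational = promote_if_acceptable(
    space, [((5.0, 5.0), "truck"), ((0.1, 0.1), "noise")])
```

Teaching the application more means rerunning `learn` with additional examples and passing the resulting space back through `promote_if_acceptable`; the operational `decide` path never mutates the space.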

Of the 2 AI approaches described here, most AI applications use the latter. Although new solution spaces are brought forward into the operational system quite frequently, it is possible to determine what the application is doing well, what it is not doing well and why.

If you are fortunate enough to work on AI applications, here are some points to consider:

  • General controls are general controls. These configuration settings are often looser than in typical IT applications because of the middleware being used.
  • Middleware, such as TensorFlow and Caffe, needs to be secured and allocated with fixed directories and privileges within the overall system. In many cases, this means the learning system is hosted on a separate hardware suite or virtual machine (VM).
  • Do not go operational using laboratory software such as MATLAB. These products tend to be slow and carry all the security problems associated with a runtime interpreter.
  • The saying is, “Take your large data application to the data. Do not try to bring large data to the application.” Regardless of the approach your organization chooses, there are threats and vulnerabilities that will come exceedingly close to the internal portion of your system. You must understand the data flow from all sources into the AI application, the environment on which the AI application is hosted and how hostile data are controlled so that they cannot inject errors into processing.
  • Clearly define the security boundary. This is critical when securing AI cognitive processing. It is important to ensure that as security engineers, we are not held accountable for the correctness of these types of applications. The good news is that cognitive processing has become a marketing word and often the system is not actually performing cognitive processing.
  • It is critical to have a large test set that exercises the solution space before it is released into the operational AI application. Even then, you will see cases where the system fails to perform.
  • Understand the importance of corporate proprietary data and how they relate to the output of these systems. Often the product of these systems can be exactly what your organization is marketing. As a result, each solution space becomes sensitive data to your organization.
  • Good configuration management (CM) on the solution space is critical. AI developers have been known to want to update their solution space daily. This pseudo software needs to be controlled like high-order language.
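Placing the solution space under CM control can be as simple as treating each release like a source artifact: record a content hash per version and refuse to load anything that does not match the record. The following is a minimal, invented sketch (function names and the registry structure are assumptions, not any real CM tool):

```python
# Hypothetical sketch of configuration management for a solution space:
# record a content hash per released version, and verify the hash before
# the operational system loads the artifact.

import hashlib
import json

def register(artifact_bytes, version, registry):
    """Record a solution-space release under CM control."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    registry[version] = digest
    return digest

def verify(artifact_bytes, version, registry):
    """Refuse to load a solution space whose hash does not match the CM record."""
    return hashlib.sha256(artifact_bytes).hexdigest() == registry.get(version)

registry = {}
release = json.dumps({"weights": [0.12, 0.87]}).encode()
register(release, "v1.0", registry)

print(verify(release, "v1.0", registry))         # True: matches the CM record
print(verify(release + b"x", "v1.0", registry))  # False: undocumented change
```

A daily model update is harmless under this discipline; an update that bypasses `register` is what the control exists to catch.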

AI applications do boring and repetitive tasks well. In addition, where the problem set is bounded, they do well with “cognitive processing.” However, the day when AI replaces humans is a long way off. Most operational AI applications that make critical decisions on complex, unbounded problems rely on a human to make the final decision. In this capacity, they are great decision aids and genuinely accelerate a human’s ability to process far more data than was possible with traditional technologies.

Bruce R. Wilkins, CISA, CRISC, CISM, CGEIT, CISSP, is the chief executive officer of TWM Associates Inc. In this capacity, Wilkins has the opportunity to provide his customers with secure engineering solutions for innovative technology and cost-reducing approaches to existing security programs.


Audit and Assure Windows File Servers With This Audit Program


Servers provide a variety of services including, but not limited to, email, print and file services. Based on these functional descriptors, file servers provide shared access. As a result, the vulnerabilities associated with shared access and data storage are important considerations in a file server audit program. For Windows servers, it is especially noteworthy that Windows Server 2008, 2012 and 2016 are supported by Microsoft. For organizations running versions prior to 2008 (generally Windows Server 2003), security risk rises because Microsoft no longer patches and fixes vulnerabilities in those versions. As a result, the enterprise may incur additional operating costs as it identifies (and sometimes purchases) its own vulnerability solutions. There may also be compliance implications in running an unsupported version of Windows. For example, section 6.2 of the Payment Card Industry Data Security Standard (PCI DSS) requires that complying organizations “ensure that all system components and software are protected from known vulnerabilities by installing applicable vendor-supplied security patches.”

By conducting an audit touching on the following areas, IT auditors can help organizations gain assurance that timely identification and resolution of server vulnerabilities allows the organization to continue to meet its business objectives. When auditing a Windows File Server, the IT auditor should consider the following audit objectives:

  • Limitations on administrator and administrator-equivalent access based on the principle of least privilege
  • Mitigation of network risk through secure firewall configuration, Transmission Control Protocol (TCP)/Internet Protocol (IP) settings, port restriction and other secure administration practices
  • Operating system security through use of BitLocker for disk encryption

Conducting a formal assessment allows an enterprise to know where its controls are working as intended and where areas for improvement exist. The ISACA Windows File Server Audit/Assurance Program provides IT auditors with the tools to successfully assess the risk associated with Windows File Servers. This audit program can be downloaded for US $25 for members and US $50 for nonmembers by visiting the Windows File Server Audit/Assurance Program page of the ISACA website.


Find Perspective on Vulnerability Approaches


Some debates are eternally fought: Coke vs. Pepsi, regular vs. extra crispy and, in the security world, full vs. responsible vs. selective disclosure.

While there has always been a robust debate on how much to disclose about vulnerabilities (not to mention when and to whom), recent events have thrown the debate back into the forefront. For example, the EternalBlue server message block (SMB) vulnerability (CVE-2017-0144) was known for some time by the US National Security Agency (NSA) before the rest of the world was made aware. In this case, notification was initially controlled (to Microsoft); the vulnerability subsequently became known to the general public via its release by the Shadow Brokers hacker collective (believed by many to be a Russian intelligence organization). Since EternalBlue was the principal propagation vector for the WannaCry malware, the question remains how much damage could have been prevented had disclosure occurred sooner.

It is a thorny issue, and one where the risk dynamics are not always clear. On the one hand, fuller disclosure increases the ability of the general population to take active countermeasures (via patching or compensating controls); on the other, narrower disclosure potentially decreases risk for those who cannot (or choose not to) patch quickly. Failure to disclose at all preserves the utility of the exploit as an offensive capability.

To help gain perspective on both sides, ISACA asked 2 experts—each at an opposite end of the disclosure spectrum—to share their opinions on the issue. Both are passionate about their points of view and have decades of experience. It is hoped that, by sharing both perspectives, practitioners can understand their own position better, think through the position of the other side and, ultimately, end up with a more nuanced understanding of the issue. Read the Journal online-exclusive articles “Does Fully Disclosed Mean Fully Exposed” and “Exposing the Fallacies of Security by Obscurity.”


Understanding Threat Intelligence


In ISACA’s State of Cyber Security 2017 study, 80% of respondents reported feeling it was “likely” or “very likely” that they would experience an attack in 2017, and 53% reported an increase in attacks in 2016. With the threat landscape becoming increasingly hostile, enterprises continue to look for ways to effectively detect and respond to threats.

By modeling, observing and understanding the cyberthreat landscape, organizations become better able to protect digital assets. Threat intelligence in this context refers to the systematic gathering of evidence about the threat environment, analysis of the evidence and, ultimately, the utilization of that analysis to minimize risk. ISACA has released the ISACA Tech Brief: Threat Intelligence to help practitioners explain threat intelligence to business partners and help them recognize the benefits that utilizing intelligence can bring.

This complimentary tech brief contains insights on how threat intelligence benefits the enterprise, illustrates the types of threat intelligence available and their related challenges, takes a look at the future of threat intelligence, and highlights questions an enterprise should ask when evaluating threat intelligence solutions. Expert insights are also included. This is the third tech brief in a recurring series, intended to offer a quick overview of a topic at a nontechnical level. Tech briefs are a great resource for IT professionals to use when educating their business partners on the basics of a technology that might be beneficial in their industry.

To learn more and download this tech brief, visit the ISACA Tech Brief: Threat Intelligence page of the ISACA website.


Mitigate Cyber Threats With Wapack Labs Reports


Many security practitioners think it is likely or very likely that they will suffer a cyberattack. In fact, 80% of those who responded to ISACA’s State of Cyber Security 2017 reported this feeling for 2017. Threat intelligence provides enterprises with information on potential targets, suspect attackers and the methods they are using, and specific indicators of compromise of which to be aware.

To stay on top of cyber security in your organization, ISACA is providing weekly downloadable threat analysis reports from Wapack Labs. These reports will help provide you with relevant, timely content that will better enable your enterprise to prepare for and mitigate threats. These reports provide early warning threat detection information based on Wapack’s Internet surveillance operations; data gathering; and analysis of economic, financial and geopolitical issues.

To download the reports or learn more about other offerings from Wapack Labs, visit the Cyber Threat Intelligence Solutions page of the ISACA website.


White Paper: Extend Your COBIT 5 Application to Data Governance


Data maintenance and management are becoming ever more complicated. Data environments and internal data requirements continue to change rapidly. If data cannot be kept accurate, timely, reliable and secure, risk may increase across business, operational and compliance domains. COBIT 5 provides definitions, good practices and modeling to assist practitioners in dealing with the critical role of data within the enterprise. Strong management provides the underpinning of good data governance, to which COBIT 5 enablers are relevant and applicable.

The Getting Started with Data Governance Using COBIT 5 white paper extends the coverage of COBIT 5 enablers to data governance by leveraging guidance found in COBIT 5: Enabling Information. The COBIT 5: Enabling Information publication provides IT and business stakeholders with COBIT 5 framework guidance and a comprehensive information model that is based on the generic COBIT 5 enabler model and that addresses all aspects of data and information.

This white paper explores the design and delivery of governance for data operations and describes the enablers that not only inform data governance and management, but also help to address common issues. You can access the complimentary ISACA white paper on the Getting Started with Data Governance Using COBIT 5 page of the ISACA website.


New ISACA Research Reveals a Governance Gap


More than 90% of senior business leaders agree that strong technology governance contributes to improved business outcomes and increased agility, according to ISACA’s latest survey results, available in the report “Better Tech Governance Is Better for Business.” Yet despite recognizing the link between governance and outcomes, a governance gap still exists. Many business leaders (69%) report that their leadership and board of director teams need to establish a clearer link between business and IT goals.

The survey also found that:

  • Only about half of respondents (55%) say their organization’s leadership team and board are doing everything they can to safeguard their organization’s digital assets and data.
  • Of those affected by the EU General Data Protection Regulation (GDPR), only 32% of respondents are satisfied with the progress their organization has made to prepare for the GDPR deadline.
  • Most organizations are using a governance framework (including the 28% that use COBIT), but 1 in 5 do not.

To view the full survey results, a related report, infographic and case studies, visit the Better Tech Governance Is Better for Business page of the ISACA website.