The institutions we all serve are going to use big data, if not now, then soon, because of the value that can be extracted from it for the benefit of the products we make and the customers we serve. This is true of almost every industry, and as we advance technologically and find new, more efficient ways to make our work intelligent, we must also keep assessing the regulatory landscape. In my recent Journal article, I wrote about how audit professionals who work with big data, deal with global privacy implications and handle sensitive research data need the knowledge and technical aptitude to audit the big data space if they are to stay relevant. Almost all enterprises are now taking on big data projects, and the growing burden of regulatory risk requirements is leading internal compliance, risk and audit functions at these enterprises to demand auditors with these skill sets.
My article introduced 3 key concepts for working with big data and protecting it:
- Anonymization—The data controller's ability to anonymize data in such a way that it is impossible for anyone to reestablish the identity of the data subject
- Reidentification—The process of reversing deidentification by relinking the data to the identity of the data subject
- Pseudonymization—The process of deidentifying data sets by replacing particularly identifying attributes (e.g., race, gender) in a record with substitute values. The owner of the original data set, however, can still identify the data directly, allowing for reidentification.
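To make the distinction concrete, pseudonymization is often implemented by replacing a direct identifier with a keyed hash. The following is a minimal sketch, not from the article; the key, field names and truncation length are hypothetical. Only the holder of the key (the data controller) can regenerate the mapping and reidentify records:

```python
import hmac
import hashlib

def pseudonymize(record, fields, secret_key: bytes):
    """Return a copy of `record` with the named fields replaced by
    keyed-hash pseudonyms (HMAC-SHA256, truncated for readability)."""
    out = dict(record)
    for f in fields:
        digest = hmac.new(secret_key, str(record[f]).encode(), hashlib.sha256)
        out[f] = digest.hexdigest()[:16]  # deterministic pseudonym
    return out
```

Because the same key always yields the same pseudonym, the controller can rejoin pseudonymized records to the originals, which is exactly what distinguishes pseudonymization from true anonymization.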
Big data will grow exponentially and, as studies show, “A full 90 percent of all the data in the world has been generated over the last two years.” The use of big data to capitalize on the wealth of information is already happening, as can be seen in the daily use of technology platforms such as Google Maps or predictive search on websites. The conversation around the use of big data is already happening, and it is our job as auditors to become part of that conversation and help the organizations we serve navigate the risk of managing big data.
Read Mohammed Khan’s recent Journal article:
“Big Data Deidentification, Reidentification and Anonymization,” ISACA Journal, volume 1, 2018.
Big data refers to volumes of data so large, unstructured and complex that they cannot be processed with traditional data-handling techniques. Proper collation, coordination and harnessing of such data is therefore necessary if relevant users, such as chief information officers, IS auditors and chief executive officers, are to make meaningful decisions. My recent Journal article describes a 6-stage cycle for implementing big data in organizations, especially commercial banks, illustrated by the acronym DIRAPT: definition, identification, recognition, analysis, ploughing-back and training. I consider DIRAPT a cycle because the stages need to be repeated over and over:
- Definition of scope—Large volumes of data are generated every second by machines (such as automated teller machines and mobile devices). It makes no sense to process all data from all fronts at once, so banks must define the scope to be covered at any one time and extract meaningful information from it.
- Identification of skill set—Careful selection of staff with the requisite skills is essential to a successful implementation. Experienced people should be drawn from operations, marketing, control and other departments to contribute their expertise; a rich blend of skilled people goes a long way toward determining the success of the implementation.
- Recognition of data sources—Effective data tracking and measurement must stem from identified sources.
- Analysis of output—Examination of the data (both structured and unstructured) within the defined scope yields information appropriate for management use. The review may require specialized analytics tools such as Hadoop, NoSQL, Jaspersoft, Pentaho, Karmasphere, Talend Studio and Skytree.
- Ploughing back experience—Experience gathered over time can be built up and reused. No 2 projects will be exactly the same, but experience gathered from a previous project can always be used to help in subsequent ones.
- Training and retraining—Training is a continuum. There should be training before, during and after each cycle of implementation. Lessons learned at every stage should be well coordinated and recorded for reference purposes. Training should be encouraged at every stage.
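The cyclical nature of DIRAPT can be pictured with a trivial sketch (illustrative only) that iterates the 6 stages over and over, as the article recommends:

```python
from itertools import cycle

# Stage names taken from the article's DIRAPT acronym.
DIRAPT = ["definition", "identification", "recognition",
          "analysis", "ploughing-back", "training"]

def next_stages(n):
    """First n stages of the repeating DIRAPT cycle."""
    it = cycle(DIRAPT)  # restarts from "definition" after "training"
    return [next(it) for _ in range(n)]
```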
The DIRAPT cycle can prove beneficial to organizations, such as commercial banks, to enjoy the dividends of big data.
Read Adeniyi Akanni’s recent Journal article:
“Implementation of Big Data in Commercial Banks,” ISACA Journal, volume 1, 2018.
The adoption of cloud applications (apps) and services is accelerating unabated as organizations increasingly look to take advantage of the business, collaboration and productivity benefits these apps provide. The flip side, however, is that the cloud is increasingly home to high-value confidential corporate and personal data, making cloud apps prime targets of cybercriminals.
Exploitation and malware distribution attacks in the cloud, in particular, should be treated as an arms race between cloud security firms and cybercriminals. As cybercriminals find design flaws in cloud apps that leave them open to attack and identify exploitable cloud user behaviors, cloud security vendors need to step in to fill the security gaps that cloud app vendors cannot.
Malware distribution mechanisms have become more advanced as attackers have begun using cloud storage services, such as Google Drive and Dropbox, to distribute malware. Recent examples include the Petya and Cerber ransomware families, distributed via Dropbox and Office 365, respectively. Attackers deploy advanced malware-hosting techniques, such as obfuscation, camouflaging and metamorphism, to hide malicious content in cloud-hosted files and then distribute those files to large numbers of Internet users as part of drive-by download attacks.
Malware distribution is not the only threat. Cloud apps are also susceptible to targeted phishing attacks, sensitive data exposure, account hijacking and other exploits. By leveraging new security approaches, such as massively scalable cloud-based architectures and sophisticated data science and machine learning technologies, cloud security vendors, and the enterprises they seek to protect, can get a leg up on the “bad guys.” Some countermeasures and proactive steps include:
- Leverage user behavior analytics (UBA) using data-science-powered and machine learning techniques to detect anomalies in the cloud network traffic to unearth potential threats. The idea is to analyze deviations in users’ profiles based on their usage and interaction with cloud apps.
- Perform continuous monitoring of sensitive content being uploaded and shared via cloud apps, and enforce policies that govern sharing of this sensitive or compliance-related data in conformance with enterprise policies. Malware attacks can be thwarted, or their impact reduced, as enterprises gain visibility into both cloud app and non-cloud app channels.
- Scan files sitting in enterprise cloud apps via application programming interfaces using an advanced malware analysis engine to ensure that files do not carry any unauthorized code for distribution.
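To make the UBA idea in the first countermeasure concrete, here is a toy sketch that flags cloud app sessions whose upload volume deviates sharply from a user's historical baseline. The z-score feature and threshold are illustrative assumptions; production UBA systems model many more behavioral signals:

```python
from statistics import mean, stdev

def anomalous_sessions(history_mb, new_sessions_mb, z_threshold=3.0):
    """Flag new session upload volumes (MB) whose z-score against the
    user's historical baseline exceeds the threshold."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        sigma = 1e-9  # avoid division by zero for perfectly flat history
    return [x for x in new_sessions_mb if abs(x - mu) / sigma > z_threshold]
```

A sudden 500 MB upload from a user who normally uploads around 10 MB per session would be surfaced for investigation, while typical sessions pass silently.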
Read Aditya K Sood and Rehan Jalil’s recent Journal article:
“Cloudifying Threats—Understanding Cloud App Attacks and Defenses,” ISACA Journal, volume 1, 2018.
Recently, a vulnerability was discovered in the Wi-Fi Protected Access II (WPA2) protocol that secures most modern protected Wi-Fi networks. The flaw lies in the standard itself, leaving even properly configured Wi-Fi networks exposed and vulnerable. Early reports of this vulnerability overstated the risk and downplayed the difficulty of exploiting it.
What Is It?
A client (a device connecting to the Wi-Fi network) establishes its connection with a handshake, a back-and-forth exchange used to prove that both the client and the access point (the Wi-Fi network) hold the proper credentials and to agree on the keys that encrypt traffic on the network. The vulnerability, known as a key reinstallation attack, lets an attacker within range manipulate and replay parts of this handshake so that the client reinstalls an encryption key that is already in use. That resets the key's cryptographic counters and allows the attacker to decrypt, and in some configurations inject, traffic that should have been protected.
How Does This Affect Me?
This affects you because most public Wi-Fi networks and most private home Wi-Fi networks use the WPA2 protocol that this vulnerability targets. It currently affects millions of home users and many small businesses around the world. Those with home Wi-Fi should be aware that this vulnerability can affect them. And remember, most people have smartphones, and smartphones are often connected to Wi-Fi networks.
Should I Be Concerned?
Yes, you should be concerned, but know that executing an attack at this level requires a fair amount of IT experience, a motive to access your network and close physical proximity to your Wi-Fi network. Most home users are at low risk of being affected; however, many of us use public Wi-Fi networks at places such as coffee shops, hotels, small businesses and other popular businesses offering free Wi-Fi. In such places, an attacker can simply sit there, blending in with other patrons, and use an attack like this to get at private and confidential data that may be stored on your computer. Once attackers are on your network, they probably have enough knowledge and experience to search for data, manipulate your computer, turn on a security camera or even adjust your thermostat—it just depends on the extent of your Wi-Fi network and the devices connected to it.
How Can I Protect Myself?
The first step is to be wary of public Wi-Fi networks and connect to them only when you truly need to. Public Wi-Fi should always be a concern: it offers you no security as a user, and the devices present on the network are unknown and may be malicious. Second, when passing personal or confidential information, always make sure you are connecting to a secured service. Secured services are typically identified by https in the URL or by a lock icon in the address bar. If you are unsure, contact the company or service in question and ask whether it offers a secured connection and whether that connection uses TLS 1.2. If you are a more sophisticated user, you can use a virtual private network (VPN) connection, which encrypts your communication and secures the data at all times; most VPNs are paid services and offer support if you decide to use that option. Finally, if you can use a hard-wired Ethernet connection, do so. A physical Ethernet connection adds a layer of security when the Wi-Fi's security is unknown, and it is not affected by this vulnerability.
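For the more technically inclined, the TLS 1.2 check mentioned above can be approximated in code. This sketch, using Python's standard ssl module, builds a client context that verifies certificates and refuses anything below TLS 1.2; it is an illustration of the principle, not a complete security solution:

```python
import ssl

def make_strict_context() -> ssl.SSLContext:
    """Client TLS context that verifies certificates and requires TLS 1.2+."""
    ctx = ssl.create_default_context()  # enables cert + hostname verification
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.1 and below
    return ctx
```

Wrapping a socket with this context (`ctx.wrap_socket(sock, server_hostname=host)`) will fail the handshake if the server cannot meet these requirements, which is exactly the behavior you want on an untrusted network.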
How Do I “Fix” This?
Rest assured, the vendors that provide the hardware and software behind Wi-Fi connections are scrambling to issue patches and updates that remediate the vulnerability. Companies including Apple, Microsoft and Google have already issued fixes, but some fixes may take time to reach commonly used devices such as tablets and smartphones. Any device that connects to your Wi-Fi network should be evaluated for a firmware update; firmware updates change a device's embedded software in ways meant to enhance or secure it. One of the best ways to “fix” or remediate this vulnerability is to check for and apply any updates issued by the manufacturer. If you are unsure what to do or how to do it, refer to the documentation provided with the device, the manufacturer's website, the place of purchase or a local IT expert with computer security knowledge.
Stay safe on the Internet! Be sure you know what you are doing and where you are looking.
Chris Evans, CISM, CRISC, CISSP, PCI ISA, is the security and compliance manager at ISACA and has been with the organization for more than 11 years. Chris has nearly 20 years of information technology experience with his interest and passion dedicated to information and data security. He has held information security leadership positions in companies including Andrew Corporation (manufacturing), Edward Hospital (healthcare) and Transamerica/General Electric (finance).
Ransomware holds a tight grip on its victims and their most valuable data and is a global epidemic reaching all corners of the world.
The infection vectors most commonly used by ransomware are email attachments, links in emails, compromised websites and malvertising. The first, attacks via email attachments, can be intercepted by a security or gateway appliance before a user even receives the lure.
When an attack uses a website that security products have already identified as compromised or as hosting malicious content, it can be blocked by looking at the domain or IP address in the link embedded in the email or in the URL a user visits. In practice, however, simple blacklisting approaches suffer from the relatively short lifespan of these drive-by landing pages.
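The domain blacklisting approach described above is simple to sketch. The blocklist entries below are hypothetical; real products draw on continuously updated threat intelligence feeds, which is why short-lived landing pages evade them:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known-bad domains.
BLOCKLIST = {"malicious.example", "bad-cdn.example"}

def is_blocked(url: str) -> bool:
    """True if the URL's host, or any parent domain of it, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check "a.b.c", then "b.c", then "c" against the blocklist.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))
```

Checking parent domains catches attacker-controlled subdomains, but a freshly registered domain not yet on any feed sails straight through, motivating the on-the-wire detection discussed next.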
To cope with this problem of blacklisting short-lived content, security solutions must find the attack “on the wire.” This means that the system either proactively probes for the content of a website or it waits until a real user is tricked into following the link to the exploit site and finds the attack in the live traffic.
However, not all attacks make use of exploit kits; often, victims are simply tricked into downloading and running the ransomware payload. Thus, security technologies need to intercept these downloads and evaluate whether the file is safe to be opened by a user—typically by running the program inside a sandbox.
As ransomware evolves, it is imperative for enterprises to adopt solutions that intercept ransomware on the wire to protect their users from these emerging and ongoing attacks.
Read Clemens Kolbitsch’s recent Journal article:
“Evasive Malware Tricks: How Malware Evades Detection by Sandboxes,” ISACA Journal, volume 6, 2017.
In my recent Journal article, I covered how organizations can leverage information governance (IG) programs to enable change and instill a culture of security. With today’s reality of increasing global data privacy regulations and unrelenting data breaches, sound data management and security are more important than ever before. In the face of these challenges, one of the most effective things organizations can do is enable true change, weaving security and privacy into the fabric of their cultures. Once that has been achieved, enforcement of the established programs and policies is equally important so that the hard work was not futile.
Maintaining change and enforcing adoption of new processes is critical to shaping a culture of security that grows and strengthens over time. When employees understand that participation in training programs or cooperation with new policies will boost their performance ratings or compensation, they are much more likely to adopt and commit to the changes. Some guiding best practices can be put in place at the outset of an IG initiative to support long-term enforcement. These include:
- Cross-functional support—To be successful, IG must be a cross-stakeholder initiative with sponsorship from legal, compliance, security, IT and records departments.
- Executive sponsorship—An IG project simply cannot be successfully implemented—or enforced—without C-level involvement.
- Change management—The course of changing business processes should be rooted in compliance—change becomes a major challenge in large organizations where a wide range of priorities and personality types exist.
- Training—Computer-based training on new technologies and policies should be mandatory for all users, and it should include education around the implications of security breaches, the cost they impose on the organization and how to prevent them.
- Strategic technology implementation—Every technology evaluation that impacts the company’s data in any way should involve the legal and/or e-discovery team in addition to records, IT and compliance.
In many cases, it is not that employees are ambivalent about security. Once they have been educated about the overall importance of security to the long-term health of the company, most employees comply with new policies and embrace the newly formed culture.
Read Sean Kelly’s recent Journal article:
“Instilling a Culture of Security Starts With Information Governance,” ISACA Journal, volume 5, 2017.
Most of us have gone through the shocking realization that compliance certification does not mean our environment is secure; we are forced to remember that security and compliance are different outcomes. The question that comes up is: Is compliance certification worth the effort if it does not provide security? To answer this question, it is important to understand the relationships that can develop between security and compliance. There are 4 possible combinations.
In the first scenario, the enterprise is neither compliant nor secure. Security compliance was not mandatory, and the team took advantage of that fact. It was only the huge impact of ransomware that revealed the need for compliance and security in the environment. The executives had a limited view of the importance of data security, with painful results. The recommended resolution is that all responsible organizations need to understand that security is an obligation to customers.
In the second scenario, the enterprise is secure, but not compliant. The organization examined was defense-related and secure only in a limited way: the company had the latest firewalls, so it was surprised when it lost data to malware. Data security rests on a combination of people, processes and technologies working together to provide a better security posture.
In the third scenario, the enterprise is compliant, but not secure. The organization's sensitive devices (e.g., point-of-sale devices) were secured by locks and monitored by cameras, as required by the Payment Card Industry Data Security Standard (PCI DSS), so the organization met the compliance certification conditions. However, the locks were of low quality and easily opened, and the camera resolution was so poor that nothing was recorded in the dark. Root cause analysis showed that, by installing locks and cameras, the organization had met the letter of the security standard's requirements, but by opting for cheaper equipment, it had disregarded the spirit of the framework. The recommended resolution for this scenario is to aim to satisfy the spirit of the security standard by using quality controls.
In the fourth scenario, the enterprise achieves the ultimate goal—combining compliance certification with maturity in security. This enterprise maintains an active compliance status supported by a culture of security. The steps to achieve this synergy are:
- Achieve compliance certification.
- Obtain support from executives.
- Do a self-assessment of your status quo.
- Create a roadmap.
- Bridge the gap by remediation.
- Evolve from formal certified compliance by implementing data security at a program level.
- Adopt and create a governance model.
Ransomware-like breaches that affected even compliant organizations have made it relevant to review the relationship between formal security certification and actual security status. We can synergize the power of both strategies by using certification as a milestone on the strategic roadmap to a secure state. Instead of thinking in terms of security versus compliance, a better option is to achieve security via compliance.
Read Tony Chandola’s recent Journal article:
“Compliant, Yet Breached,” ISACA Journal, volume 5, 2017.
As a banker, I strongly consider blockchain a technology juggernaut that is going to transform the financial services sector by increasing transaction efficiency, transparency and security while reducing costs. Through its distributed ledger mechanism, blockchain technology will eliminate the need for intermediaries in the end-to-end trading process. This will certainly attract investment banks, as it will help them reduce the costs involved in trading and significantly increase transaction speed.
Also, the implementation of blockchain technology will help banks integrate ledger and processing systems, reduce reconciliation activities, and streamline clearing and settlement. The benefits of blockchain applications are predicted to be significant: according to one financial magazine, blockchain technology could help banks reduce infrastructure costs related to cross-border payments, securities trading and regulatory compliance by US $15 billion to US $20 billion per year by 2022. The many financial technology startups emerging across the globe use blockchain platforms extensively to power digital currencies, strengthen transaction security and create decentralized markets.
Despite the many unique selling propositions of blockchain technology, it still needs to reach the level of robust payment platforms such as SWIFT. To become a trusted partner of all the stakeholders involved in financial transactions, blockchain technology needs to evolve further to provide the required confidence by assuring the security and reliability of the platform. The future success of blockchain implementations in the banking sector will rest on the trust that regulators and banks develop in the blockchain technology platform.
Read Vimal Mani’s recent Journal article:
“A View of Blockchain Technology From the Information Security Radar,” ISACA Journal, volume 5, 2017.
Secure Shell (SSH) is everywhere. Regardless of size, industry, location, operating systems in use or any other factor, chances are near certain (whether you know it or not) that SSH exists and is in active use somewhere in your environment. Not only does SSH come “stock” with almost every modern version of UNIX and Linux, but it is also a normative mechanism for systems administration, a ubiquitous method for secure file transfer and batch processing, and a standard way of accessing virtual hosts, whether they are on premises or in the cloud.
Because of this ubiquity, SSH is important for assurance, security and risk practitioners to pay attention to. There are a few reasons why this is the case.
First, configuration. SSH can be complicated to configure, and incorrect or inappropriate configuration translates directly into technical risk and potential security issues. Why is it complicated? The configuration parameters border on the arcane, and making strong selections requires knowledge of the underlying protocol's operation. These choices are highly dependent on both environment and usage, so what is robust enough for one use case might be insufficient for another. Likewise, the client and the server (e.g., ssh and sshd in OpenSSH) have separate configuration options, and each option directly impacts the security properties of the usage.
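As a concrete example of how consequential these parameters are, here is a minimal audit sketch that flags a few commonly hardened sshd_config directives. The directive names are real OpenSSH options; the “expected” values are illustrative audit choices for this sketch, not a standard, and real configurations use more directives and syntax variants than handled here:

```python
# Illustrative hardening expectations (lowercased directive names).
EXPECTED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "x11forwarding": "no",
}

def audit_sshd_config(text: str):
    """Return (directive, actual_value) pairs that deviate from EXPECTED."""
    findings = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, _, value = line.partition(" ")  # simplified: space-separated form
        key = key.lower()
        if key in EXPECTED and value.strip().lower() != EXPECTED[key]:
            findings.append((key, value.strip()))
    return findings
```

Even this toy checker shows why SSH configuration review belongs in an audit program: a single permissive line buried among dozens of arcane directives changes the security posture of every host that shares the file.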
Second, usage. SSH usage tends to be niche and to grow organically over time rather than being “deployed” in a planned-out, systematic way. This happens naturally: the number of SSH users in an organization is relatively small (mostly operations folks), the tool itself is ubiquitous (coming, as it does, “stock” on multiple platforms), and, because it is a security-focused tool, it is sometimes viewed with reduced skepticism by assurance and security teams. These factors make it less “visible” from a management point of view, meaning organizations very often do not systematically analyze the potential risk areas associated with SSH, evaluate the security properties of their existing usage or otherwise examine configuration and other parameters.
Finally, cryptography. By virtue of how the protocol operates, cryptographic keys are integral to SSH, and choices must be made about how the cryptography operates, how keys are managed and distributed, and numerous other considerations. As we all know, managing cryptographic keys is challenging, getting it right is critical to preserving the security of the usage, and cryptography in general is a subject area that is difficult to get right.
For these reasons, it is important that organizations pay attention to their SSH usage the same way they would any other technology they use. There are specific practical considerations organizations should address, and important questions to ask themselves, around the usage, configuration and maintenance of SSH. ISACA's recent guidance Assessing Cryptographic Systems lays out general considerations for assessing a cryptographic system, but considerations specific to SSH remain; for example, SSH-specific configuration options and key management issues.
To help practitioners work through these issues, ISACA has published SSH: Practitioner Considerations. The goal of the publication is to give security, audit, risk and governance practitioners more detailed guidance about how to approach and evaluate SSH usage in their environments.
Ed Moyle is director of thought leadership and research at ISACA. Prior to joining ISACA, Moyle was senior security strategist with Savvis and a founding partner of the analyst firm Security Curve. In his nearly 20 years in information security, he has held numerous positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers and senior security analyst with Trintech. Moyle is coauthor of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as an author, public speaker and analyst.
As an IT auditor at a software company, I discovered that security vulnerabilities in our bespoke product had not been getting released to clients on a timely basis. We had been doing penetration tests for years, but obtaining the penetration test report had not translated to the fixes being released to the users. Our clients remained exposed to known vulnerabilities, a situation that meant my employer was assuming all potential liability for the situation.
There were, it turned out, many things that slowed delivery of the fixes. Some factors were organizational and some were technical. I address the organizational challenges of client resistance and lack of internal commitment in my recent Journal article. But I will offer an additional insight for readers of Practically Speaking on overcoming technical complexities in patching a bespoke software product.
The technical complexities in the environment were nearly endless. Some penetration test findings applied only to certain versions of the software. Some fixes were beyond the capabilities of the development platform and required extensive software workarounds. Other fixes required a minimum browser level that was beyond a client's reach. Sometimes a peculiarity of the client environment prevented one fix or another from working; e.g., a homebrewed single sign-on or a bespoke antivirus configuration could hamper the rollout of bug fixes. These complexities prevented the patch bundle from each year's penetration test report from being deployed into the production environment.
The problem, it turned out, was that both the vendor and the client believed each year's findings could be resolved via a single monolithic patch. By bundling the fixes, we greatly increased the likelihood that some complexity would render the patch unsuitable or undeliverable to a given client. Working with the developers, we devised a matrix that broke down each year's penetration test results, with a row for each distinct finding in each report. We worked with the product owners to understand which finding applied to which version of the software. When a software fix was created, we recorded the version control branch number for the fix against the relevant finding. When a release was scheduled for a client, we worked with the project management office to ensure that the relevant fixes got scheduled.
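The tracking matrix described above can be pictured as a simple data structure: one row per finding, mapped to the affected versions and the version control branch carrying its fix. The types and helper below are a hypothetical reconstruction for illustration, not our actual tooling:

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Finding:
    report_year: int                   # which penetration test report raised it
    finding_id: str
    affected_versions: Set[str]
    fix_branch: Optional[str] = None   # version control branch; None until fixed

def fixes_for_release(findings: List[Finding], version: str) -> List[str]:
    """Branches that must ship for a client running `version`."""
    return [f.fix_branch for f in findings
            if version in f.affected_versions and f.fix_branch]
```

Querying the matrix per client version, rather than shipping one monolithic patch, is what let the release process sidestep the environment-specific complexities described above.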
It was messy and labor-intensive, but it worked. Supported by a strong version control system and a documented package release program, a reliable program for tracking penetration test fixes to production was put into effect. In time, we eradicated client exposure to known vulnerabilities, resolved our employer’s potential liability and were ready for each year’s fresh batch of findings.
Read Michael Werneburg’s recent Journal article:
“Addressing Shared Risk in Product Application Vulnerability Assessments,” ISACA Journal, volume 5, 2017.