Steps to Enforcing Information Governance and Security Programs

In my recent Journal article, I covered how organizations can leverage information governance (IG) programs to enable change and instill a culture of security. With today’s reality of increasing global data privacy regulations and unrelenting data breaches, sound data management and security are more important than ever before. In the face of these challenges, one of the most effective things organizations can do is enable true change, weaving security and privacy into the fabric of their cultures. Once that has been achieved, enforcement of the established programs and policies is equally important so that the hard work is not wasted.

Maintaining change and enforcing adoption of new processes are critical to shaping a culture of security that grows and strengthens over time. When employees understand that participation in training programs or cooperation with new policies will boost their performance ratings or compensation, they are much more likely to adopt and commit to the changes. Some guiding best practices can be put in place at the outset of an IG initiative to support long-term enforcement. These include:

  • Cross-functional support—To be successful, IG must be a cross-stakeholder initiative with sponsorship from legal, compliance, security, IT and records departments.
  • Executive sponsorship—An IG project simply cannot be successfully implemented —or enforced—without C-level involvement.
  • Change management—Changing business processes should be rooted in compliance; change becomes a major challenge in large organizations, where a wide range of priorities and personality types exist.
  • Training—Computer-based training on new technologies and policies should be mandatory for all users, and it should include education around the implications of security breaches, the cost they impose on the organization and how to prevent them.
  • Strategic technology implementation—Every technology evaluation that impacts the company’s data in any way should involve the legal and/or e-discovery team in addition to records, IT and compliance.

In many cases, it is not that employees are indifferent to security. Once they have been educated about the overall importance of security to the long-term health of the company, most employees comply with new policies and embrace the newly formed culture.

Read Sean Kelly’s recent Journal article:
“Instilling a Culture of Security Starts With Information Governance,” ISACA Journal, volume 5, 2017.

Examining the “Compliant, Yet Breached” Phenomenon

Most of us have gone through the shocking realization that compliance certification does not mean that our environment is secure. We are forced to remember that security and compliance are different outcomes. The question that arises is: Is compliance certification worth the effort if it does not provide security? To answer this question, it is important to understand the relationships that can develop between security and compliance. There are 4 possible combinations.

In the first scenario, the enterprise is neither compliant nor secure. Security compliance was not mandatory, and the team took advantage of that fact. It was only the huge impact of a ransomware attack that revealed the need for compliance and security in the environment. The executives had a limited view of the importance of data security, with painful results. The recommended resolution for this issue is that all responsible organizations need to understand that security is an obligation to their customers.

In the second scenario, the enterprise is secure, but not compliant. The organization examined was defense-related and secure only in a limited way: it had the latest firewalls, but little else. It was, therefore, surprised when it lost data to malware. Data security is based on a combination of people, processes and technologies working together to provide a better security posture.

In the third scenario, the enterprise is compliant, but not secure. The organization’s sensitive devices (point-of-sale devices) were secured by locks and monitored by cameras, as required by the Payment Card Industry Data Security Standard (PCI DSS). The organization, therefore, met the compliance certification conditions. However, the locks were of low quality and easily opened, and the camera resolution was so poor that nothing useful was recorded in the dark. Root cause analysis showed that the organization had installed locks and cameras, thus meeting the security standard’s requirements, but by opting for cheaper options, it had disregarded the spirit of the framework. The recommended resolution for this scenario is that an organization should aim to satisfy the spirit of the security standard by using quality controls.

In the fourth scenario, the enterprise achieves the ultimate goal—leveraging compliance certification to achieve maturity in security. This enterprise maintains an active compliance status supported by a culture of security. The steps to achieve this synergy are:

  1. Achieve compliance certification.
  2. Obtain support from executives.
  3. Do a self-assessment of your status quo.
  4. Create a roadmap.
  5. Bridge the gap by remediation.
  6. Evolve from formal certified compliance by implementing data security at a program level.
  7. Adopt and create a governance model.

The ransomware-like breaches that affected even compliant organizations have made it worthwhile to review the relationship between formal security certification and actual security status. We can learn to synergize the power of both strategies by using certification as a milestone on the strategic roadmap to a secure state. Instead of thinking in terms of security versus compliance, a better option is to achieve security via compliance.

Read Tony Chandola’s recent Journal article:
“Compliant, Yet Breached,” ISACA Journal, volume 5, 2017.

The Future Looks Promising for Blockchain Technology

Being a banker, I strongly consider blockchain technology to be a technology juggernaut that is going to transform the financial services sector by increasing transaction efficiency, transparency and security while reducing costs. Through the distributed ledger mechanism, blockchain technology will eliminate the need to have intermediaries for the end-to-end trading process. This will certainly attract investment banks as it will help them reduce the costs involved in the trading process and significantly increase transaction speed.

Also, the implementation of blockchain technology will help banks integrate ledger and processing systems, reduce reconciliation activities, and streamline clearing and settlement. The benefits of blockchain applications are predicted to be significant. According to one financial magazine, blockchain technology could help banks reduce infrastructure costs related to cross-border payments, securities trading and regulatory compliance by US $15 billion to US $20 billion per year by 2022. Many of the financial technology startups emerging across the globe use blockchain platforms extensively to power digital currencies, strengthen transaction security and create decentralized markets.

Despite the many unique selling propositions of blockchain technology, it still needs to rise to the level of robust payment platforms, such as SWIFT. To become a trusted partner of all the stakeholders involved in financial transactions, blockchain technology needs to evolve further to give the required confidence by assuring the security and reliability of the platform. The future success of blockchain implementations in the banking sector will be based on the trust that regulators and banks develop in the blockchain technology platform.

Read Vimal Mani’s recent Journal article:
“A View of Blockchain Technology From the Information Security Radar,” ISACA Journal, volume 5, 2017.

SSH:  Why You Need to Care

Secure Shell (SSH) is everywhere. Regardless of the size, industry, location, operating systems in use or any other factor, chances are near certain (whether you know about it or not) that it exists and is in active use somewhere in your environment. Not only does SSH come “stock” with almost every modern version of UNIX and Linux, but it is a normative mechanism for systems administration, a ubiquitous method for secure file transfer and batch processing, and a standard methodology for accessing virtual hosts, whether they are on premises or in the cloud.

Because of this ubiquity, SSH is important for assurance, security and risk practitioners to pay attention to. There are a few reasons why this is the case. 

First, configuration. SSH can be complicated to configure, and incorrect or inappropriate configuration translates directly to technical risk and potential security issues. Why is it complicated? The configuration parameters border on the arcane, and they require knowledge of the underlying protocol operation to make sure strong selections are made. These configuration choices are highly dependent on both environment and usage, so what might be robust enough for one use case might be insufficient for another. Likewise, the client (ssh) and the server (sshd) have separate configuration options, and each option directly impacts the security properties of the usage.
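
To make this concrete, here is a minimal sketch, in Python, of the kind of spot check an assessor might script against an sshd_config file. The directives and “weak” values flagged below are illustrative assumptions rather than a complete hardening baseline; a real review would work from an environment-specific standard.

  # Illustrative spot check of sshd_config; the flagged directives and
  # values are assumptions, not a complete hardening baseline.
  WEAK_SETTINGS = {
      "permitrootlogin": {"yes"},
      "passwordauthentication": {"yes"},
      "protocol": {"1", "2,1", "1,2"},  # SSH protocol 1 is obsolete
  }

  def audit_sshd_config(path="/etc/ssh/sshd_config"):
      findings = []
      with open(path) as config:
          for lineno, raw in enumerate(config, 1):
              line = raw.split("#", 1)[0].strip()  # drop comments
              if not line:
                  continue
              parts = line.split(None, 1)
              if len(parts) != 2:
                  continue
              key, value = parts[0].lower(), parts[1].strip().lower()
              if value in WEAK_SETTINGS.get(key, ()):
                  findings.append((lineno, key, value))
      return findings

  for lineno, key, value in audit_sshd_config():
      print(f"line {lineno}: {key} {value} (weak setting)")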

Second, usage. Usage tends to be niche and tends to grow organically over time rather than being “deployed” in a planned-out, systematic way. It is natural that this happens: the number of SSH users in the organization is relatively small (consisting mostly of operations folks), the tool itself is ubiquitous (coming as it does “stock” on multiple platforms) and, because it is a security-focused tool, it is sometimes viewed with reduced skepticism by assurance and security teams. These factors make it less “visible” from a management point of view, meaning organizations very often do not systematically analyze potential risk areas associated with SSH, evaluate the security properties of their existing usage or otherwise systematically examine configuration and other parameters.

Finally, cryptography. By virtue of how the protocol operates, cryptographic keys are integral to SSH, and choices are available about how the cryptography operates, how keys are managed and distributed, and numerous other considerations. As we all know, managing cryptographic keys can be challenging, it is critical to get it right for the security of the usage to be preserved, and cryptography in general is a subject area that is difficult to get right.
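
As an illustration of why key management deserves this attention, the following sketch inventories the entries in a single authorized_keys file, assuming the standard “[options] key-type base64-key [comment]” entry format; the file path is hypothetical. A real assessment would also consider key age and strength, forced commands, and how keys are distributed and revoked.

  def list_authorized_keys(path):
      # Inventory entries in an authorized_keys file and flag duplicates.
      entries, seen = [], {}
      with open(path) as keyfile:
          for lineno, raw in enumerate(keyfile, 1):
              line = raw.strip()
              if not line or line.startswith("#"):
                  continue
              fields = line.split()
              # The base64 key blob always begins with "AAAA".
              blob = next((x for x in fields if x.startswith("AAAA")), None)
              if blob is None:
                  continue
              comment = fields[-1] if fields[-1] is not blob else "(no comment)"
              if blob in seen:
                  comment += f" (duplicate of line {seen[blob]})"
              else:
                  seen[blob] = lineno
              entries.append((lineno, comment))
      return entries

  # Hypothetical path; a real inventory would iterate over every account.
  for lineno, comment in list_authorized_keys("/home/admin/.ssh/authorized_keys"):
      print(lineno, comment)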

For these reasons, it is important that organizations pay attention to their SSH usage the same way that they would any other technology they use. There are specific practical considerations that organizations should address and important questions to ask themselves around usage, configuration and maintenance of SSH. ISACA’s recent guidance Assessing Cryptographic Systems lays out the general considerations for assessing a cryptographic system, but SSH-specific considerations remain, for example, configuration options and key management issues unique to SSH.

To help practitioners work through these issues, ISACA has published SSH: Practitioner Considerations. The goal of the publication is to give security, audit, risk and governance practitioners more detailed guidance about how to approach and evaluate SSH usage in their environments. 

Ed Moyle is director of thought leadership and research at ISACA. Prior to joining ISACA, Moyle was senior security strategist with Savvis and a founding partner of the analyst firm Security Curve. In his nearly 20 years in information security, he has held numerous positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers and senior security analyst with Trintech. Moyle is coauthor of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as an author, public speaker and analyst.

Tracking Vulnerability Fixes to Production

As an IT auditor at a software company, I discovered that fixes for security vulnerabilities in our bespoke product had not been released to clients on a timely basis. We had been doing penetration tests for years, but obtaining the penetration test report had not translated into the fixes being released to users. Our clients remained exposed to known vulnerabilities, which meant my employer was assuming all potential liability for the situation.

There were, it turned out, many things that slowed delivery of the fixes. Some factors were organizational and some were technical. I address the organizational challenges of client resistance and lack of internal commitment in my recent Journal article. But I will offer an additional insight for readers of Practically Speaking on overcoming technical complexities in patching a bespoke software product.

The technical complexities in the environment were nearly endless. Some penetration test findings applied only to certain versions of the software. Some fixes were beyond the capabilities of the development platform and required extensive software workarounds. Other fixes required a minimum browser level that was beyond a client’s reach. Sometimes a peculiarity of the client environment prevented one fix or another from working, e.g., a homebrewed single sign-on or a bespoke antivirus configuration could hamper the rollout of bug fixes. These complexities prevented the patch bundle from each year’s penetration test report from being deployed into the production environment.

The problem turned out to be both the vendor and the client believing each year’s findings could be resolved via a monolithic patch. By bundling the fixes, we greatly increased the likelihood that some complexity would render the patch unsuitable or undeliverable to a given client. Working with the developers, we devised a matrix that broke down each year’s penetration test results, with a row for each distinct finding in each report. We worked with the product owners to understand which finding applied to which version of the software. When a software fix was created, we recorded the version control branch number for the fix against the relevant finding. When a release was scheduled for a client, we worked with the project management office to ensure that the relevant fixes got scheduled.
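
A minimal sketch of what such a tracking matrix might look like as a data structure follows; the finding IDs, version numbers and branch names are hypothetical, and the actual matrix can live in anything from a spreadsheet to a ticketing system.

  # Hypothetical tracking matrix: one row per distinct penetration test
  # finding, mapped to affected versions and the version control fix branch.
  findings = [
      {
          "id": "PT2017-03",                  # finding from an annual report
          "affected_versions": {"4.1", "4.2"},
          "fix_branch": "bugfix/PT2017-03",   # branch holding the fix
          "released_to": set(),               # clients who have received it
      },
  ]

  def fixes_for_release(findings, client_version, client_name):
      """Return unreleased fixes that apply to a client's software version."""
      return [
          f for f in findings
          if client_version in f["affected_versions"]
          and client_name not in f["released_to"]
      ]

  print(fixes_for_release(findings, "4.2", "ClientA"))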

It was messy and labor-intensive, but it worked. Supported by a strong version control system and a documented package release program, a reliable program for tracking penetration test fixes to production was put into effect. In time, we eradicated client exposure to known vulnerabilities, resolved our employer’s potential liability and were ready for each year’s fresh batch of findings.

Read Michael Werneburg’s recent Journal article:
“Addressing Shared Risk in Product Application Vulnerability Assessments,” ISACA Journal, volume 5, 2017.

Equifax:  Too Soon for Lessons Learned?

Most practitioners have probably heard about the Equifax breach by now. If you have not yet, get ready to hear about it nonstop—probably for the next year or 2 at least. Why? Because it eclipses even the 2013 Target breach (which people are still talking about) both in the number of individuals potentially impacted (143 million) and in the potential sensitivity of the records involved (which include social security numbers, dates of birth, addresses, credit card numbers and driver's license information).

The details of this are still unfolding, so we do not have the full picture yet. It will probably be a few months before we do. But in the meantime, I think we know enough to highlight at least a few lessons learned: specifically, things that it behooves organizations to have in mind as they plan (and ideally exercise) their own incident response strategies. We can use what happened with Equifax as an illustration of why these principles are a good idea.

Since it is early in the cycle, it bears noting that a grain of salt should be applied as we go through these. After all, emerging details might change our understanding of specifically what transpired. But in the meantime, there are a few items that, based on what we know right now, are useful takeaways for other organizations planning their own response processes.

Lesson 1:  Application Security

The first takeaway is based on what we know about the root cause. We do not fully understand what happened, but we do know that it was caused by (per Equifax) a “website application vulnerability.” It is unclear whether they mean an issue in the underlying web server (or supporting software) or an issue in the application running on it, but we know that organizations tend to struggle with application security—so I do not think anyone would be surprised if it is the latter. This event should, therefore, underscore the importance of application security generally: having a robust development and release life cycle, “building security into” production applications, and robustly testing both applications and the underlying technical substrate upon which they reside.

Lesson 2:  Optics

The second thing we can learn is about management of the optics during the incident response and breach notification processes. Equifax is taking a bit of flak in the press and on social media because 3 key executives (including the chief financial officer and the president of US operations) sold more than US $2 million worth of stock in August. That was after the breach was discovered internally, but before it was disclosed to the public. Equifax told the press that these executives had no knowledge of the breach at the time (because otherwise the sales would have been illegal); still, better internal communication could have helped Equifax avoid what is bound to be a source of serious bad press in the coming months.

This is why it is a good idea to foster and maintain clear, open and timely communication channels between all areas (including executives and legal counsel) as incident response events unfold. Additionally, the point has been made in the press that the free identity theft protection offered by Equifax requires those accepting the offer to give up their right to sue or participate in a class action. Those are not great optics either. Consumers are likely angry about this, so extending an offer “with strings attached” is potentially caustic.

Lesson 3:  Encryption

It probably stands to reason that, had the compromised information been encrypted, that fact would have been included in the information made public about the breach. To the previous point about optics, stating that the information was encrypted would have defused quite a bit of Equifax’s current public relations nightmare. Based on this, we can probably assume for now (though later facts might certainly indicate otherwise) that it was not encrypted.

Encryption of data at rest is not difficult to deploy nowadays; that is true whether the data are structured or unstructured. So, if you have a database of millions of social security numbers, bulk storage of files containing financial information or any other situation that could be explosive from a privacy standpoint, asking why those data are not encrypted is a useful exercise. There are certainly situations where it can be challenging to implement encryption, but balance that against the explosive potential consequences of a large-scale breach.
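
As a small illustration of how low the barrier has become, here is a sketch of field-level encryption at rest using the widely available Python cryptography package. The hard parts in practice are key storage, rotation and access control, which this sketch deliberately leaves out.

  from cryptography.fernet import Fernet

  # In practice, load the key from a vault or HSM; generating it inline
  # is for illustration only.
  key = Fernet.generate_key()
  fernet = Fernet(key)

  ssn = b"078-05-1120"            # a well-known specimen SSN, not real data
  token = fernet.encrypt(ssn)     # store the token, never the plaintext
  assert fernet.decrypt(token) == ssn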

Lesson 4:  Be Alert to Deadlines

Equifax is also taking criticism in the press about the time that it took to notify impacted individuals. They discovered the breach on 29 July, but we are only learning about it now. Keep in mind that some jurisdictions have a specific “clock” by which you must notify customers (e.g., Florida, USA, is 30 days—or 45 with an extension). Of course, law enforcement may direct an organization to delay that notification (we do not yet know if that was the case in this situation or not), but it is helpful to take these deadlines into account, include them in incident response planning and put mechanisms in place to make sure they are followed during an incident when stress levels are high.
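
Even a back-of-the-envelope calculation like the following sketch, which uses the 29 July discovery date and the 30-day Florida clock mentioned above, can anchor that part of an incident response plan; actual obligations vary by jurisdiction and should come from legal counsel.

  from datetime import date, timedelta

  discovered = date(2017, 7, 29)               # Equifax's stated discovery date
  notify_by = discovered + timedelta(days=30)  # e.g., Florida's basic 30-day clock
  print(notify_by)                             # 2017-08-28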

There are likely numerous other lessons that will surface as events unfold. We will just need to watch, wait and analyze them as they come.

Ed Moyle is director of thought leadership and research at ISACA. Prior to joining ISACA, Moyle was senior security strategist with Savvis and a founding partner of the analyst firm Security Curve. In his nearly 20 years in information security, he has held numerous positions including senior manager with CTG’s global security practice, vice president and information security officer for Merrill Lynch Investment Managers and senior security analyst with Trintech. Moyle is coauthor of Cryptographic Libraries for Developers and a frequent contributor to the information security industry as an author, public speaker and analyst.

ESA:  What Is It and How Does it Work?

Enterprise security architecture (ESA) is the methodology and process used to develop a risk-driven security framework and business controls. The focus of an enterprise architect should be to align information security controls and processes with business strategy, goals and objectives.

Normally, developing an effective ESA is achieved following these steps:

  • Defining the business’s goals and objectives
  • Understanding business risk and threats
  • Understanding compliance, regulation and legal requirements
  • Identifying the appropriate framework and architecture vision
  • Identifying the appropriate security controls (gap analysis)
  • Managing and implementing the security controls
  • Monitoring and evaluating the security controls
  • Assessing and identifying gaps before repeating the cycle

The previously mentioned steps are considered a part of ESA life cycle management. It is important to note that ESA is not a one-off task but a continuous process.

ESA Life Cycle Management

Guidance on How to Choose Architecture Framework and Controls
Consider the following steps when selecting a framework:

  • Pick a framework that is relevant to your business and applicable regulations (e.g., US National Institute of Standards and Technology [NIST] Cybersecurity Framework, International Organization for Standardization [ISO]/International Electrotechnical Commission [IEC], COBIT).
  • Customize the controls to fit your business’s purpose and align them with goals and objectives. Make sure all business risk and threats are managed with appropriate controls. Tune and finalize the framework and document the requirements.

Guidance on Business Risk Identification
Business risk identification is a fundamental part of setting up an architecture. One way to identify business risk is to look at current threats to your business goals and objectives.

However, I suggest you start your business risk identification with business attribute profiling, a useful concept introduced by the SABSA framework.

To begin your business attribute profiling, you need to identify all attributes that are important to your business. For example, you may find that industry regulation compliance, assured customer privacy and assured customer satisfaction are important. Once you have established the important attributes for your business, you can find the risk associated with each corresponding attribute. 
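
As a hypothetical illustration, an attribute profile can be as simple as a mapping from each business attribute to the risk associated with it; the risk entries below are examples only.

  # Hypothetical business attribute profile: each attribute maps to the
  # business risk associated with it.
  attribute_profile = {
      "industry regulation compliance": ["fines from missed regulatory obligations"],
      "assured customer privacy": ["breach or misuse of personal data"],
      "assured customer satisfaction": ["service outages eroding customer trust"],
  }

  for attribute, risks in attribute_profile.items():
      for risk in risks:
          print(f"{attribute}: {risk}")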

Guidance on Gap Analysis
Gap analysis needs to be performed to identify the requirements for progressing from the current architecture to the desired architecture. Normally, maturity models, such as Capability Maturity Model Integration (CMMI), can be used to identify the current maturity level of each control and its required level. After this is established, a relevant migration plan can be created and implemented.
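
A sketch of such a gap analysis follows, using CMMI-style maturity levels from 1 to 5; the control names and levels are hypothetical.

  # Hypothetical maturity gap analysis using CMMI-style levels (1-5).
  controls = {
      "access management":  {"current": 2, "target": 4},
      "incident response":  {"current": 3, "target": 4},
      "encryption at rest": {"current": 1, "target": 3},
  }

  # Largest gaps first, to prioritize the migration plan.
  by_gap = sorted(controls.items(),
                  key=lambda kv: kv[1]["target"] - kv[1]["current"],
                  reverse=True)
  for name, levels in by_gap:
      gap = levels["target"] - levels["current"]
      if gap > 0:
          print(f"{name}: level {levels['current']} -> {levels['target']} (gap {gap})")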

Read Rassoul Ghaznavi Zadeh’s recent Journal article:
“Enterprise Security Architecture—A Top-Down Approach,” ISACA Journal, volume 4, 2017.

Questions to Ask When Selecting an ITIL Automation Tool

One of the main tasks of the Information Technology Infrastructure Library (ITIL) implementation process is choosing an ITIL automation tool. Hence, while embarking on the IT service management (ITSM) automation journey, we should not rush into implementing a tool, even if the supplier claims that the tool has pre-built ITIL processes. First and foremost, we need to identify the existing gaps and the maturity levels in the 3 major domains, namely people, process and technology. The priority of these 3 domains should also be people, process and technology, respectively. Second, we need to ask specific questions for each of these 3 domains.

Questions that need to be answered in the people domain are:

  • Does your existing organizational structure have adequate staffing in all ITIL-relevant areas? This includes help desk, desktop support, system administration and database administration.
  • Are the job descriptions clear?
  • Does each function have standard operating procedures (SOPs)?
  • Is each function exposed to ITIL processes?

Questions that need to be answered in the process domain are:

  • Does your organization have well-defined policies on all core processes, such as incident, change, release and service level management?
  • Is each process defined in detail with work flows, notifications and escalation paths?
  • Does each process have process owner(s)?
  • Does each process also have a responsible, accountable, consulted and informed (RACI) chart for each activity within the process?
  • How does your organization measure process efficiency? Have you defined the key performance indicators (KPIs)?
  • Does your organization have well-defined service level agreements (SLAs)?
  • Does your organization have well-defined business services and service catalogues?

Questions that need to be answered in the tools domain are:

  • Does the proposed ITIL tool support ITIL-certified processes?
  • Considering scalability and flexibility, can the proposed ITIL tool provide a cloud-based solution?
  • Is the proposed ITIL tool modular and does it allow for implementation of specific processes?
  • Does the proposed ITIL tool provide a strong configuration management database (CMDB) to manage the IT asset and configuration management?
  • Does the proposed ITIL tool support standard databases (e.g., Oracle, Microsoft SQL Server)?
  • Does the proposed ITIL tool provide strong analytics to generate dashboards and KPI reporting?
  • Does the proposed ITIL tool have functionality to support context-sensitive multi-channel self-service for users on mobile, Internet or tablets?
  • Does the proposed ITIL tool include a unified portal covering self-service requests, knowledge management and dashboards?
  • Does the proposed ITIL tool have flexibility in licensing models with named users (for high volume users) and floating users (approvers, low volume processes)?
  • Does the proposed ITIL tool provide automated discovery tools to gather IT asset configuration data?

Finally, ensure all key stakeholders, including business management, technical support teams and call center employees, are a part of the project team.

Read Ram Mohan, Mathew Nicho and Shafaq Khan’s recent 2-part Journal article:
“Challenges and Lessons Learned Implementing ITIL,” ISACA Journal, volume 4, 2017.

Obtaining Accurate HTTPS Posture Information

There are far more ways to apply encryption incorrectly than there are ways to apply it correctly. Sadly, many people think they already know everything they need to know about encryption because they have read a few articles online. Recently, I published an article in which I discuss methods for assessing your HTTPS posture. While I specifically focused on internal systems, where you have some degree of control (or are obligated to inform those who do), it is also extremely important not to overlook the necessity of performing the same type of assessment against vendor solutions.

Many times, I have pressed vendors for details regarding security only to receive the responses, “I do not have the information,” or my personal favorite, “It is encrypted.” Not having the information is inexcusable, and responding with “it is encrypted” is arguably even worse. It implies that they cannot articulate the details and hope that you will simply nod your head and ask no further questions.

When considering HTTPS posture, there are a few key points to keep in mind. While these points do apply to internal configurations, they especially apply to vendor-provided solutions and information:

  1. Question everything. "It is encrypted" is a class of answers, not an answer on its own. The question is how it is encrypted. It is akin to someone asking what you like to eat and you replying, "food."
  2. If a vendor ever says that they use proprietary encryption (I am still shocked at how often I encounter this), it is a very bad sign. It borders on a statement of ineptitude.
  3. Evaluate what is meant when an enterprise says, "we do not share that information." Similar to the previous point, this statement implies the vendor does not fully understand the subject. If they were applying encryption correctly, they would proudly proclaim the details to anyone who asks.
  4. Ensure weak ciphers and protocols are explicitly disabled. It is not uncommon for a vendor to say something like, "we use TLSv1.2," and while this is ideal, it does not reveal what else is enabled. For example, using TLSv1.2 but leaving SSLv3 enabled largely defeats the purpose. (A quick check is sketched after this list.)
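
As a quick illustration of point 4, the following sketch uses Python’s standard ssl module against a hypothetical host to report the protocol version and cipher suite a server actually negotiates. A full posture assessment would enumerate every protocol and cipher the server is willing to accept, which this single handshake does not do.

  import socket
  import ssl

  host = "example.com"  # hypothetical target
  context = ssl.create_default_context()

  with socket.create_connection((host, 443), timeout=5) as sock:
      with context.wrap_socket(sock, server_hostname=host) as tls:
          print(tls.version())  # negotiated protocol, e.g., 'TLSv1.2'
          print(tls.cipher())   # (cipher name, protocol, secret bits)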

The previous points will help drive out the true encryption details. By clarifying these details, not only does the level of security increase, but so does the understanding of how security is implemented.

Read Kurt Kincaid’s recent Journal article:
“HTTPS Posture Assessment,” ISACA Journal, volume 3, 2017.

Using Hackers’ Own Tools Against Them

There is a certain satisfaction that comes from turning the tables on a seemingly unbeatable adversary. Luke Skywalker exploited a design flaw to destroy the Death Star. Rocky Balboa exploited Ivan Drago’s arrogance to win a boxing round. Sarah Connor exploited a reprogrammed Arnold Schwarzenegger to beat the T-1000 in Terminator 2.

In cyber security, the hacker community often seems as evil as Darth Vader, as cold as Ivan Drago and as relentless as the Terminator. It would be nice if there were a way to turn the tables and beat hackers at their own game.

Whether for financial gain, social activism, mischievous vandalism or other malicious motivation, hackers have been exploiting weaknesses in human nature and network defenses and making life miserable for enterprises for decades. But today, we security professionals are starting to turn things to our advantage. By amassing knowledge of the millions of techniques known to be used by hackers and combining that information with real-time threat intelligence and continuous, automated vulnerability testing, it is possible to beat hackers at their own game.

Imagine, as in The Terminator, that you could see how an adversary attacked you, understand the weaknesses they would exploit, quantify which security defenses were failing and then go back in time to fix the problem before it happens. That is what we are talking about. And it is not a point-in-time assessment that may be valid today and obsolete tomorrow. It is a constant process based on up-to-the-minute analysis and intelligence.

It is important to use breach simulations to “breach your own castle.” This process not only ensures that your investments in cyber security are calibrated to meet the specific needs of your enterprise, but also creates a sort of incident response muscle memory that enables a timely, efficient response when an attack does take place.

My recent Journal article goes into detail about my company’s approach, but to improve our industry’s readiness and efficacy, we believe in sharing information and in having a robust dialog that challenges assumptions and improves processes. In that spirit, I look forward to reading and responding to your comments.

Read Danelle Au’s recent Journal article:
“Breach Your Castle for Better Security,” ISACA Journal, volume 3, 2017.
