Cyberincidents involving ransomware are a common occurrence lately. Hardly a week goes by without hearing about an incident in the news. Some involve an organization paying a ransom to get access to files, and others involve enterprises deciding not to pay and dealing with sometimes costly and protracted recovery processes. Paying a ransom, as tempting as it might be to regain access to files, creates a societal negative externality.
Negative externality is a term used by economists to describe a condition in which a third party suffers a cost as a result of a transaction. One common example is a factory dumping toxic waste into a river: A third party, the people who live downstream, is harmed by the economic exchange between the factory owners and those who buy the goods the factory produces. A technology example, and the primary focus of my ISACA Journal article, titled “The Downstream Effects of Cyberextortion,” is paying ransomware demands. There are 2 parties in the transaction: the cybercriminal and the victim. Every time a victim pays a ransomware demand, cybercriminals are emboldened, enriched and encouraged. Paying the ransom creates more future victims, thereby creating a negative externality.
Common advice is often “Never pay!” This might be good guidance if one wishes to improve the overall computer security ecosystem, but is this good advice for the small community hospital that does not have good backups and where lives may be at stake? This is the question—and decision—that I analyze in the article. Thinking about this problem as a series of decisions helps frame the problem, identify risk and identify opportunities in which cybersecurity professionals can disrupt or influence the decision. If one is faced with this kind of problem, the decision flow can be broken down into these 3 high-level steps:
1. Restore from system backups; if backups do not exist, follow step 2.
2. Obtain assistance to decrypt the files without paying the ransom (e.g., a security consulting firm, the No More Ransom Project); if unsuccessful, follow step 3.
3. Decide whether to pay the ransom or deal with data loss.
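As a rough sketch, the 3 steps above could be expressed as a simple decision function. The function name, inputs and return strings below are illustrative assumptions on my part, not anything prescribed in the article:

```python
# A minimal sketch of the three-step ransomware decision flow.
# Inputs and outcome labels are illustrative assumptions.

def ransomware_decision(has_backups: bool,
                        decryption_available: bool,
                        willing_to_pay: bool) -> str:
    """Walk the three high-level decision steps in order."""
    # Step 1: restore from system backups if they exist.
    if has_backups:
        return "restore from backups"
    # Step 2: seek outside help (e.g., a decrypter for this strain).
    if decryption_available:
        return "decrypt with third-party assistance"
    # Step 3: pay the ransom or accept the data loss.
    return "pay ransom" if willing_to_pay else "accept data loss"
```

Framing the flow this way makes the intervention points visible: every branch before the final one is a place where preparation (backups) or outside assistance can keep a victim from ever facing the pay-or-lose decision.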
I also briefly touch on nudge theory. Nudge theory, explored in the field of behavioral economics, describes ways that actors can be nudged toward good decisions without government interference, coercion, etc. I believe nudge theory can be very effective in helping solve the ransomware problem. Some possibilities are:
- Helping smaller firms with preventative measures, such as patching and other security basics
- Pro bono or low-cost response assistance: negotiating with cybercriminals, forensics, data restoration
- Encouraging projects that develop decrypter kits, such as the No More Ransom Project. It might be worthwhile to set up a bug bounty pool, funded by corporate donations, that pays independent security researchers to develop countermeasures to ransomware strains.
Let us continue the discussion in the comments section. Do you find this type of decision analysis useful? Can it help solve common cybersecurity problems? How would you nudge people to make better decisions?
Read Tony Martin-Vegue’s recent Journal article:
“The Downstream Effects of Cyberextortion,” ISACA Journal, volume 4, 2018.
With so many compromises leading to data breaches, one common question is why breaches are still occurring even after so much investment in technology, people and processes. Are we “barking up the wrong tree”?
Perhaps, yes. Today, security professionals face a different challenge: where to focus and what to protect. The traditional approach of protecting everything is failing; focus and effort should be on critical assets.
Knowing what to protect is extremely relevant for deciding the level of security protection required. The asset could be either raw data or processed information, along with its ecosystem (e.g., operating system, application, web, data or application programming interface [API]). Lack of visibility into this key and critical piece of information leads to:
- Excess security focus on irrelevant assets
- Deficient security focus on critical assets
- No security focus on critical assets
Is there a well-designed and sustainable approach to identify and protect assets based on their criticality and risk exposure?
The solution is to analyze the end-to-end (creation, storage, transmission, access and archival/destruction) data flow and, once that analysis is complete, create a detailed blueprint of the data life cycle, including information on:
- Gateways (entry and exit)
- User roles and access rights
- Upstream and downstream information flow
- Upstream and downstream interface protocols
- Internal and external connectivity
- System and platform
- Implemented security controls
- Storage location and type (transient or permanent)
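One way to make such a blueprint concrete is a structured record per data asset. The schema below is only an illustrative sketch: the field names mirror the list above, but the article does not prescribe any particular format, and the sample values are invented:

```python
# An illustrative data life cycle blueprint record. Field names follow
# the list in the text; the schema itself is an assumption, not a standard.
from dataclasses import dataclass, field

@dataclass
class DataLifecycleBlueprint:
    asset: str
    gateways: list = field(default_factory=list)       # entry and exit points
    user_roles: dict = field(default_factory=dict)     # role -> access rights
    upstream_flows: list = field(default_factory=list)
    downstream_flows: list = field(default_factory=list)
    interface_protocols: list = field(default_factory=list)
    connectivity: str = "internal"                     # internal or external
    platform: str = ""                                 # system and platform
    security_controls: list = field(default_factory=list)
    storage: dict = field(default_factory=dict)        # location -> transient/permanent

# Hypothetical example entry for one asset
blueprint = DataLifecycleBlueprint(
    asset="customer PII",
    gateways=["web portal (entry)", "reporting API (exit)"],
    user_roles={"analyst": "read", "admin": "read/write"},
    security_controls=["encryption at rest", "access logging"],
    storage={"db-primary": "permanent", "etl-staging": "transient"},
)
```

Keeping one such record per asset gives investigators and designers the at-a-glance view described below.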
The previously mentioned information will help in aligning the required focus and effort for designing, implementing and monitoring security measures. This approach can easily be adopted for blueprinting all existing data/information assets. Having a data life cycle blueprint will be beneficial for:
- Providing clear visibility into data assets for faster design decisions and a clear overview of all impacted components
- Providing a quick overview of controls to be added due to a changing threat environment, regulation or incident
- Enabling investigators with required information at a glance during an incident
- Providing field-level information along with the building blocks
Read Sridhar Govardhan’s recent Journal article:
“Data Spill Lessons From the Oil Industry,” ISACA Journal, volume 4, 2018.
In the last few years, SWIFT has become a favorite target for hackers across the globe; the frequency of SWIFT-targeted cyberattacks bears this out. In most of these attacks, the network perimeter was compromised before the core SWIFT platform was touched. It is therefore important first to ensure that a foolproof network perimeter is built around the SWIFT infrastructure, with appropriate security solutions deployed in a defense-in-depth manner.
Data confidentiality in SWIFT can be achieved through the encryption of all payment-related data and having all links controlled by SWIFT using strong encryption algorithms. Access to SWIFT payment data should be protected by means of one-time passwords (OTPs). Controls such as unique sequencing of all messages, dual storage, real-time acknowledgement to the user, and a message authentication procedure between the sender and receiver also help ensure SWIFT data integrity by protecting against fraudulent modification of SWIFT data, which was the technique used by hackers in many recent SWIFT-targeted attacks. Availability of SWIFT infrastructure can be achieved using several measures, many of which are built into organizations in the form of continuity planning, duplication and, in some cases, triplication of equipment, extensive recovery schemes, and automatic rerouting of payments in the event of failure of some network nodes.
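To illustrate the general idea of message authentication between a sender and receiver, here is a simplified sketch using a standard HMAC from Python's library. This is a generic integrity-protection pattern, not SWIFT's actual authentication procedure, and the key and message shown are invented:

```python
# Generic message authentication sketch: the receiver recomputes the
# HMAC tag and rejects any message whose tag does not match, which
# defeats fraudulent in-transit modification.
import hmac
import hashlib

SHARED_KEY = b"pre-agreed-secret"  # hypothetical key exchanged out of band

def sign(message: bytes) -> str:
    """Sender computes an authentication tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(message), tag)

payment = b"pay 1,000 USD to account X"
tag = sign(payment)
assert verify(payment, tag)                      # untampered message passes
assert not verify(b"pay 9,000 USD to Y", tag)    # modified message fails
```

Any tampering with the payment instruction invalidates the tag, so the receiver can detect the fraudulent modification before acting on it.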
In addition to the confidentiality, integrity and availability-related controls mentioned previously, having controls, such as well-defined segregation of duties, logical access controls, control of paper output and timely validation of error reports, helps protect the SWIFT infrastructure across the Cyber Kill Chain.
An assurance that an optimum level of SWIFT security has been achieved needs to be provided by execution of well-defined internal and external audit programs on a periodic basis.
Read Vimal Mani’s recent Journal article:
“Securing the SWIFT Infrastructure Across the Cyber Kill Chain,” ISACA Journal, volume 4, 2018.
When faced with an obstacle, how do you take the first step? I have found it helps to follow the steps outlined in Lisa Avellan’s article “Five Simple Steps When You Don’t Know Where to Start”:
- Breathe and relax
- Prioritize
- Make the best decision
- Act immediately
- Evaluate
Today’s obstacles in business are typically around managing information security and the growing cyberthreats. As you are faced with security obstacles, these 5 steps can help:
- Breathe and relax—The scope and complexity of an assessment can seem stressful and overwhelming at first. Take a breath, relax and begin to tackle it step by step. You will find the actual process to be less agonizing than first assumed.
- Prioritize—I recommend that you start by conducting an assessment. Assessing the risk and gaps in your information security structure will help you identify what type of information is stored, how it is transmitted and accessed, and what risk possible threats pose to the information. The risk assessment enables you to identify hazards and risk factors that could cause harm, analyze and evaluate those hazards, and determine the best course of action to mitigate the harm and risk.
- Make the best decision for your organization—As I outline in my recent Journal article, every organization has different needs—some may need a complete overhaul, while others just need a tune-up. There are a number of different approaches to assessing the security needs of your organization. A risk assessment helps you to determine your security needs to mitigate risk. A gap analysis helps you to find the holes. A security audit is an extensive overview of an organization’s security systems and processes and helps you determine specific security needs.
- Act immediately—There is no need to panic, but do not delay. The assessment precedes your proactive security efforts, so take inventory first. Regular assessments are important to the success of any business and form the foundation of an effective IT risk management program. If you are looking to improve your security posture and boost your compliance, risk assessments and gap assessments are the key to continuous improvement and well-informed leadership decisions.
- Evaluate—Think of an assessment as a way to evaluate where you are. For example, a risk assessment is about gathering data, determining threats, analyzing risk factors and prioritizing to determine mitigation.
When it comes to managing information security, I would add a sixth step to Avellan’s list: breathe and repeat. Repeated assessments and tests allow for continuous, targeted improvements that allow for optimal risk mitigation over the long term.
Read Tyler Hardison’s recent Journal article:
“Building a Strong Security Posture Begins With Assessment,” ISACA Journal, volume 3, 2018.
While some cybersecurity teams may be anxious to get involved with master data management (MDM), there are prerequisites that we strongly recommend be in place prior to starting down the implementation path. Having a well-defined software development life cycle (SDLC) in place is important. Even more important is that adherence to the SDLC be institutionalized. Tied into this is the architecture review board, which should be reviewing all significant changes or new implementations of data, systems, technology, etc. These 2 processes should be addressed in the information security policy and, where applicable, the data governance policy.
With these building blocks in place, the following steps will get you started mapping a data protection plan that can be outlined in a governance standard document and referenced in your company’s information security policy and data governance policy:
- Step 1—Identify and document data owners for governance decisions. Ask the business to identify who can make decisions regarding data retention, data destruction, data classification, disaster recovery and business continuity planning.
- Step 2—Validate with the IT team their responsibilities for providing the hardware, operating systems, software patching, maintenance and systems support. Follow this by asking what disaster recovery plans are in place. If there is a discrepancy between disaster recovery needs and documented disaster recovery plans, bring the business and IT teams together to resolve and record the details. The same goes for any associated business continuity plans.
- Step 3—Develop a detailed document regarding the standards and procedures for access control, logging and monitoring, privileged access management, and compliance guidelines for backup data retention and any other relevant processes. It is imperative that the cybersecurity team hold a seat on the architecture review board to ensure the identification of sensitive or protected data and to recommend the appropriate protection level.
- Step 4—With the appropriate cybersecurity training, authorize the MDM staff to act as cybersecurity deputies owning the guardianship of data sources, data access and data egress. The MDM team also needs to maintain the data map that documents MDM data storage and flows.
- Step 5—Institute quarterly meetings between the cybersecurity team and the MDM team to review the configurations of all related data tools ensuring access is appropriately assigned.
- Step 6—Of great importance, user access reviews should be instituted for all data flows. This is typically done by performing quarterly access reviews for the applications that interact with MDM. We suggest assigning this task to each application team and then turning it over to the internal audit team for review.
- Step 7—In organizations where data loss prevention (DLP) software can be funded, we recommend its implementation because it adds real-time, preventative control for keeping data secure.
In the process of implementing the previous list, the cybersecurity team should perform the governance role of defining the levels of security for each data type based on its classification (e.g., public, confidential and restricted).
Ensure that your classification names align with your company’s documented management terms and that they are congruent with the corporate document management definitions.
It is important to outline which data require encryption during transmission, which data require encryption at rest and which requirements apply if the data are transmitted to a third party. Within this guidance, cybersecurity also sets the standards for compliance, which should include considerations for the Payment Card Industry Data Security Standard, the General Data Protection Regulation, personally identifiable information, the Health Insurance Portability and Accountability Act, etc.
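Such guidance can be captured as a simple classification-to-requirements matrix. The sketch below uses the example classification names from the text (public, confidential, restricted); the specific requirements shown for each level are illustrative assumptions, not mandated values:

```python
# Illustrative mapping of data classifications to minimum protection
# requirements. The requirement values are assumptions for the sketch.
PROTECTION_MATRIX = {
    "public": {
        "encrypt_in_transit": False,
        "encrypt_at_rest": False,
        "third_party_transfer": "no restriction",
    },
    "confidential": {
        "encrypt_in_transit": True,
        "encrypt_at_rest": True,
        "third_party_transfer": "contract and encryption required",
    },
    "restricted": {
        "encrypt_in_transit": True,
        "encrypt_at_rest": True,
        "third_party_transfer": "prohibited without explicit approval",
    },
}

def requirements(classification: str) -> dict:
    """Look up the minimum protections for a given classification."""
    return PROTECTION_MATRIX[classification.lower()]
```

Publishing the matrix in the governance standard gives application teams one authoritative place to check what each classification demands.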
Read Sonja Hammond and Chip Jarnagin’s recent Journal article:
“Cybersecurity vs. Master Data Management,” ISACA Journal, volume 3, 2018.
Privacy and security are issues society struggles with on a daily basis, both in our private lives and in our work. We all strive to be happy, and safety is an important but uncertain factor in our lives. When I was younger, I worked in a prison, where I felt safer than I do these days on the Internet. In prison, there was insight into the threat landscape and the measures to take when threats occurred. It was clear and visible. You simply had to press a red button and a guard or fence was there to protect you. The Internet, on the other hand, is complex, invisible and difficult to handle. There is a sense of urgency to have information security in place, but often one has no idea how to do this.
It is no longer a question if, but when, an organization will fall victim to a cyberattack. It is against this background of increased opportunity for information security breaches and heightened awareness of the repercussions of such breaches that organizations are seeking to protect their information and minimize the risk of possible damage resulting from a breach.
We observe an increase in awareness that adequate business information security (BIS) is needed, but with the increasing complexity of information security, it is important to ask ourselves how we can apply BIS effectively. The aim of our Journal article is to establish a core set of critical success factors (CSFs) that organizations can take into account when establishing a security strategy or implementing an information security program. We certainly tried to provide fresh and new insights into the CSFs needed to implement an effective business information security strategy. One of these CSFs is to “never waste a good security incident” and instead use it to accelerate improvement.
Read Yuri Bobbert and Talitha Papelard-Agteres’ recent Journal article:
“Never Waste a Good Information Security Incident: An Explorative Study into Critical Success Factors for the Improvement of Business Information Security,” ISACA Journal, volume 3, 2018.
In one of my recently published ISACA Journal articles, “Clash of the Titans: How to Win the ‘Battle’ Between Information Security and IT Without Losing Anyone,” I pointed out some of the challenges the chief information security officer (CISO) faces when it comes to prioritizing information security interests over IT interests. Although my insights refer mainly to finding common ground with the IT and infrastructure departments, at times the CISO needs to find other resources and common interests with other units to either “finance” the CISO’s solutions or implement the CISO’s policies.
One of the CISO’s natural partners is the chief risk officer (CRO), and this partnership should be nurtured and adopted by all information security members as well. Given that the CRO is accountable for enabling effective governance of significant risk while balancing risk and reward, cyberthreats and information risk should be a top priority. These threats can potentially impose not only technological risk but can also easily lead to regulatory, reputational, competitive risk and more. This formulates an intrinsic collaboration between the risk management (RM) function and the information security function within the organization.
That said, this collaboration between information security and RM must take place naturally and not only when there is a business deadlock or a disagreement between information security and IT. Nevertheless, I argue that the mutual interests apply to the CIO and the IT division as well. In my opinion, this “triumvirate dynamic” is a win-win-win situation for all parties—information security, IT and RM—as illustrated and explained in the following model:
- CISO and CRO
- The CISO needs the CRO to support governing information security interests.
- The CRO needs the CISO to operationally identify and mitigate cyberthreats and risk.
- CISO and CIO
- The CISO needs the CIO’s IT services and resources to comply with policies.
- The CIO’s interest is to have deliverables from the CIO’s human resources taken under the CISO’s responsibility.
- CRO and CIO
- It is in the CRO’s interest that each division will comply with relevant regulations.
- The CIO seeks the CRO’s understanding regarding resource constraints to prioritize projects and operations.
Figure 1: The Triumvirate Dynamic
Read Ofir Eitan’s recent Journal article:
“Clash of the Titans: How to Win the ‘Battle’ Between Information Security and IT Without Losing Anyone,” ISACA Journal, volume 3, 2018.
The majority of modern organizations have embarked on the path of building security operations centers (SOCs). Today, the SOC is not a passing trend; it is a forced restructuring and reorganizing of existing information security or cybersecurity departments. An SOC is a set of staff, processes, technologies and facilities that are primarily focused on the identification (detection) of and response to cybersecurity incidents, which arise when cybersecurity threats are realized.
From the management point of view, a use case within SOCs is a mechanism for consistent selection and implementation of cybersecurity incident detection scenario rules, tools and response tasks. From the practical (technical) point of view, a use case is a specific condition or event (usually related to a specific threat) to be detected or reported by the security tool.
The main component of use cases is a cybersecurity incident detection scenario rule (i.e., a correlation rule), which includes:
- Syntax of the rule within a specific security information and event management (SIEM) system
- Event source (any software or firmware [tools] that have logging capability and the ability to provide access to log data)
- Event category or accurate recorded event (log data)
However, there is bad news: There is no full event category catalog. Therefore, it is necessary to prepare a separate event category or accurate recorded event (log data) list (catalog) for every event source. This is a great challenge within a large IT infrastructure because, on the one hand, the catalog must be focused on cybersecurity (which is clear only to cybersecurity staff) and, on the other hand, it must be focused on specific event sources (which are clear only to IT staff)!
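As a minimal sketch, such a catalog could be represented as a lookup keyed by event source type, so that correlation rule authors can only reference categories that the source actually logs. The source types follow the examples in the article; the category names themselves are illustrative assumptions, not the article's actual catalog:

```python
# Illustrative event category catalog keyed by event source type.
# Category names are invented for the sketch.
EVENT_CATEGORY_CATALOG = {
    "operating_system": ["logon_success", "logon_failure", "privilege_escalation"],
    "firewall":         ["connection_allowed", "connection_denied", "policy_change"],
    "antivirus":        ["malware_detected", "signature_update_failed"],
}

def categories_for(source_type: str) -> list:
    """Return the event categories a correlation rule can reference
    for this source type; empty if the source is uncataloged."""
    return EVENT_CATEGORY_CATALOG.get(source_type, [])

def rule_is_designable(source_type: str, category: str) -> bool:
    """A use case can only be designed if its event source actually
    produces the category the correlation rule depends on."""
    return category in categories_for(source_type)
```

The point of the structure is the bridge it builds: cybersecurity staff work from the category names, IT staff maintain the per-source lists, and a correlation rule is only designable when both sides meet.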
Experience has shown that without an event category catalog, it is very difficult and sometimes impossible to carry out the design of a use case and, consequently, next steps within SOCs.
My recent ISACA Journal article addresses an existing challenge and provides a suggested catalog for popular event source types (e.g., operating system, database management system, web server, mail server, application software, router, firewall, antivirus tool).
Read Aleksandr Kuznetcov’s recent Journal article:
“Event Category Catalog for SOC Use Cases,” ISACA Journal, volume 3, 2018.
I want to take this opportunity to dive a little more into the metrics that come out of an access certification program. One of the greatest joys in life is when you have enough data that you can identify patterns and trends in your certification program to monitor the health of your access controls. I can certainly identify plenty of research and articles on the textbook way to manage privileged access or to set up access control and role-based security. However, I have found very little research or guidance to indicate what “good” results are for an access certification.
This is one opportunity where you can use a data-driven approach in determining the health of your controls. Many organizations will see a slew of changes from their access certifications, despite front-end controls to add and remove access that are perceived as effective. Of course, in some situations this could be by design. Maybe the changes found are low risk, such as view-only access to data that are not confidential. However, a larger number of changes (more than 5%) usually indicates opportunities for improvement.
To generate more conversation around what good access certification metrics look like, I am inviting others to share their experiences and access certification metrics, and whether they feel the metrics indicate a positive or negative trend for their access controls. During your last certification event, what percentage of entitlements were removed? How would you characterize your front-end provisioning and deprovisioning/termination controls? Does the percentage of entitlements removed match up to the strength of the front-end controls? If you have strong controls, the entitlements removed should be low. If you have weak controls, the number of entitlements removed should be larger.
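As a quick sketch of the metric in question, the health check reduces to the percentage of entitlements removed during a certification event, compared against the rough 5% rule of thumb mentioned above. The function names and sample numbers below are invented for illustration:

```python
# Compute the percentage of entitlements removed in a certification
# event and flag it against the ~5% improvement threshold from the text.
def removal_rate(entitlements_reviewed: int, entitlements_removed: int) -> float:
    """Percentage of reviewed entitlements that were removed."""
    return 100.0 * entitlements_removed / entitlements_reviewed

def needs_improvement(entitlements_reviewed: int,
                      entitlements_removed: int,
                      threshold_pct: float = 5.0) -> bool:
    """A removal rate above the threshold usually signals weak
    front-end provisioning/deprovisioning controls."""
    return removal_rate(entitlements_reviewed, entitlements_removed) > threshold_pct

# 120 removals out of 2,000 reviewed is 6%, above the 5% threshold
assert needs_improvement(2000, 120)
# 80 removals out of 2,000 is 4%, consistent with strong front-end controls
assert not needs_improvement(2000, 80)
```

Tracking this one number across certification cycles turns a compliance exercise into a trend line for control health.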
Do you have strong controls and the results of access certifications are bearing that out? Then consider doing certifications on a less frequent basis. Your users will appreciate it!
Read Vincent J. Schira’s recent Journal article:
“Rethinking User Access Certifications,” ISACA Journal, volume 2, 2018.
When developing an information security architecture framework in a new organization, there are a few steps that normally have to be taken to identify the business requirements, the right framework and the controls needed to mitigate/minimize business risk. In my Journal article, I explained the process of how this works.
Once the controls are identified, it is time to create projects and implement them. This might not be a big issue when dealing with a mature company that already has many controls in place and only needs a few additions. However, it becomes challenging as the number of projects and controls increases. The question is how to prioritize these projects and controls and implement the most important ones first.
To prioritize these tasks, we use a risk-based approach that utilizes the enterprise risk register. When developing security architecture controls, they must have a one-to-one relationship with business risk; otherwise, the controls are irrelevant to the business. Using the same approach, we can identify the impact on the business if one particular control is not in place and prioritize the controls based on their business impact.
As an example, assume we have the following controls identified as needing to be implemented:
- Web application vulnerability management
- Endpoint malware protection
Assuming business-critical data are hosted on the database and accessible by the web application serving customers, the relevant risk for the first one could be “data loss” and for the second one “IT operation failure.” We can add likelihood of occurrence to this as well and calculate the overall risk.
Figure 1 shows the risk calculation for this scenario and, as you can see, web application vulnerability management takes priority based on the overall risk ranking. (Is this a surprise?)
Figure 1—Risk Calculation for Controls
| Control | Relevant Business Risk | Relevant Information Security Risk | Business Risk Score/Impact (1-5) | Information Security Risk Score/Likelihood (1-5) | Overall Risk Score |
|---|---|---|---|---|---|
| Web application vulnerability management | Critical data loss | | | | |
| Endpoint malware protection | IT operation failure | | | | |
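Assuming the common convention that overall risk is impact multiplied by likelihood, the prioritization can be sketched as follows. The scores used here are illustrative placeholders, not the article's actual figures; they are chosen only so that the ranking matches the outcome described above:

```python
# Sketch of the overall risk calculation used to prioritize controls.
# Scores (1-5) are invented placeholders; overall = impact x likelihood
# is an assumed convention.
def overall_risk(impact: int, likelihood: int) -> int:
    assert 1 <= impact <= 5 and 1 <= likelihood <= 5
    return impact * likelihood

controls = {
    # Critical data loss: high impact, fairly likely (illustrative)
    "Web application vulnerability management": overall_risk(impact=5, likelihood=4),
    # IT operation failure: moderate impact and likelihood (illustrative)
    "Endpoint malware protection": overall_risk(impact=3, likelihood=3),
}

# Rank controls by overall risk score, highest first
priority = sorted(controls, key=controls.get, reverse=True)
```

With these placeholder scores, web application vulnerability management ranks first, matching the outcome the figure describes.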
In summary, the previously described process would help prioritize security architecture controls. Note that when dealing with operational risk, the process might be a bit more complicated, and a risk management framework should be followed.
Read Rassoul Ghaznavi-Zadeh’s recent Journal article:
“Information Security Architecture: Gap Assessment and Prioritization,” ISACA Journal, volume 2, 2018.