Jeimy J. Cano M., Ph.D., CFE, CMAS
Compliance functions seek to build and develop culture, anticipate risk and assure operations; that is, they verify that things are done in accordance with established practice. In this sense, understanding how organizations attempt to maintain a reasonable and satisfactory level of assurance involves developing a culture of compliance that can withstand the pull of convenience and comfort and that promotes a culture of sustained effort.
The failure of a control activity in the exercise of an office or position opens up the possibility of a chain of bad practices that may trigger an incident, to a greater or lesser degree, depending on the impact in the context of the business activities and their sensitivity to stakeholders and shareholders.
In this sense, information insecurity, as a motivator of compliance actions, plays an important role in the performance of assurance: continuously monitoring asymmetric movements allows us to study and refine techniques to mitigate new and emerging risks and prepares organizations to increase their resistance to failures.
Read Jeimy Cano’s recent Journal article:
“Information Security—Motivator of Corporate Compliance Practice,” ISACA Journal, volume 6, 2013.
Frank Bezzina, Ph.D., Pascal Lele, Ph.D., Ronald Zhao, Ph.D., Simon Grima, Ph.D., Robert W. Klein, Ph.D., and Martin Hellmich, Ph.D.
The specific objective of Basel III is to take into account the impact of operational risk management on value creation capacity, thereby allowing enterprises to anticipate and cover counterparty risk (i.e., the risk that the counterparty of a transaction fails to meet its obligations or might be incapable of meeting them before the fulfilment of a transaction).
The challenge for counterparty credit risk (CCR) entities is to schedule performance on the basis of the deposit of potentially recoverable operational risk losses (the source of cost savings) and to process in real time the indicators of productivity.
Through gap analysis, investor relationship management (IRM) modules provide weekly calculations of cost savings realized on each of the indicators, factors or causes at the origin of operational risk losses (absenteeism, quality defects, occupational accidents, direct productivity and skills gaps).
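As a rough illustration of this weekly gap analysis, the following sketch computes cost savings per indicator as the baseline expected loss minus the observed loss. All figures and indicator baselines are invented for the example; they are not drawn from the article.

```python
# Hypothetical weekly gap analysis: cost savings per operational risk indicator,
# computed as baseline (expected) loss minus observed loss. Figures are invented.

baseline_losses = {  # expected weekly loss per indicator (monetary units)
    "absenteeism": 12_000,
    "quality defects": 8_000,
    "occupational accidents": 5_000,
    "direct productivity": 15_000,
    "skills gaps": 6_000,
}
observed_losses = {  # actual weekly loss recorded for the same indicators
    "absenteeism": 9_500,
    "quality defects": 7_200,
    "occupational accidents": 5_000,
    "direct productivity": 13_000,
    "skills gaps": 4_800,
}

# Savings realized on each indicator this week
savings = {k: baseline_losses[k] - observed_losses[k] for k in baseline_losses}
total_savings = sum(savings.values())
```

Summing across all five indicators, rather than only the most worrying one, mirrors the point made below about considering every indicator in the value-created calculation.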
The efficiency of the proposed IRM system is based on the permanent link between internal control functions (finance, human resources and operations management), constituting the global structure of enterprise risk management (ERM), and the consideration of five indicators (and not only the most worrying) in the calculations of the created value and the variable remuneration.
The aim of IT-directed IRM is to feed the information system on which the internal controls of a firm rely in order to analyze financial risk with richer financial management data.
The pricing of assets is known to be a major difficulty for investors (banks, insurance firms and financial markets). In the absence of operational risk data, the prevailing prudent financial analysis model is one of weak effectiveness, a concept characterizing information that emerges from observation of past income statements or past stock market prices. An examination of past asset profits is useful in planning future profitability.
The utilization of expected loss data and of cost savings, bound with the CCR’s appetite for operational risk, allows financial analysts to treat the assets in line with International Financial Reporting Standards (IFRS) and US Generally Accepted Accounting Principles (GAAP)—elements on which firms depend for future economic and competitive advantages.
IT-directed IRM provides reports that enable investors to reach this objective. In particular, it supplies mathematical modelling tools (for financial and economic modelling) with data on the endogenous interaction of operational risk associated with the CCR, for the calculation of generalization ratios or for macroeconomic projections of long-term provisions. The data provided are particularly useful for updating the risk, especially when the financial and social quality of the CCR is deteriorating.
Read Frank Bezzina, Pascal Lele, Ronald Zhao, Simon Grima, Robert W. Klein and Martin Hellmich’s recent Journal Online article:
“The Value in Using IT-directed Investor Relationship Management,” ISACA Journal, volume 6, 2013.
Antonio Ramos, CISA, CISM, CRISC, CCSK
Imagine that you have decided to buy a car. And, of course, safety is really important to you so you will take into account safety characteristics in your buying decision.
The most direct way of knowing the safety characteristics of every model is asking how many NCAP stars each holds. The Global New Car Assessment Programme (Global NCAP) conducts independent research and testing programs that assess the safety and environmental characteristics of motor vehicles and their comparative performance and disseminates the results to the public. Those models with better crash protection and avoidance systems get more stars—5 stars being the best.
Does this mean that it is impossible to suffer injuries in case of an accident? No, of course not. Does it mean that your risk level is always lower when you drive a 5-star vehicle? No, again. Your risk also depends on other factors (e.g., your driving style, weather conditions).
Then, why do we look for the number of NCAP stars before purchasing? We look for NCAP stars because, under the same conditions, we are likely to be safer/suffer fewer injuries in a car with more stars.
The described rating system assigns every ICT service a label, depending on the security measures it implements, the general conditions of the vendor and the resilience mechanisms in place. In my opinion, these labels should provide information about the three dimensions of security (confidentiality, integrity and availability), because users’ requirements could be completely different in each, and the label has to provide enough information for users to make better decisions on which service to buy (if they want to consider information security in their decision).
So, when you look for an ICT service, it should be easy for you to look at its security label and know which service offers better security conditions.
Adesanya Ahmed, CRISC, CGEIT, ACMA, ACPA
Many people use mobile devices for everything from communicating, collaborating and playing games to shopping and surfing the Internet. In fact, it is nearly impossible to pry the phone from some people’s hands. The shift to mobile devices offers thrilling new ways for businesses to engage with customers, improve employee efficiency and increase cost-effectiveness. Yet, many companies are struggling to take advantage of these opportunities as they worry about security and privacy.
As more smartphones, tablets and other types of mobile devices make their way into employees’ hands, requests for corporate access from those devices are increasing, representing a huge challenge for IT departments. Not only has IT lost the ability to fully control and manage these devices, but employees are now demanding that they should be able to conduct company business from multiple personal devices. Initially, businesses were resistant to this idea due to security concerns. However, IT teams are slowly and carefully adopting the concept, as they weigh concern about the inherent risk in allowing personal devices to access and store sensitive corporate information.
With adoption of a Bring Your Own Device (BYOD) strategy, IT is being transformed, offering a revolutionary way to support a mobile workforce. The first wave of BYOD featured mobile device management (MDM) solutions that controlled the entire device, while the next wave of BYOD solutions applies only to those apps necessary for business, enforcing corporate policy while maintaining personal privacy.
It is clear that significant business risk is associated with BYOD. Organizations should consider the following when planning a BYOD strategy:
• Style of business—Some organizations or business processes have a natural fit for mobility. For example, some businesses have distributed offices with staff who perform a number of tasks at other locations with sufficient levels of employee trust to encourage home working. Other businesses’ employees need to be based in certain premises (e.g., hospitals, university campuses, manufacturing plants) to use particular facilities. The mobile needs of businesses operating outside of the organization’s premises will be different than those within the physical perimeter.
• Alignment of mobile computing with existing IT—Existing strategies and policies for general IT must be taken into account. They may be there for security, compliance or good governance reasons. These should not be destabilized by a mobile strategy.
• Balance of in-house vs. outsourced—What capabilities are there in-house and what would be better brought in from outside? Most organizations are trying to perform a primary task where technology is a supporting tool, so it is often far more effective to outsource noncore activities to specialists.
• Define a mobile policy—Mobile policies should set out the critical mobile aspects to be managed and the guidelines for their management. These should include those areas oriented around people, commercial interests and contracts, as well as those associated with technology and security. Businesses should assess the following policy areas: mobile policy management, asset management policy, geolocation with mobile representational state transfer (REST) application programming interface (API), training and policy management, and organizational management.
Buck Kulkarni, CISA, CGEIT, PgMP, ACP
Many organizations have a well-established practice of conducting postimplementation reviews and/or audits of major projects to verify the business value delivered as well as performance against key parameters. The current reviews are largely aligned to the project process and, thus, have a built-in bias toward sequential processes, e.g., integration testing begins only after unit testing is signed off, defects flow down the waterfall, money and time flow with the waterfall. The so-called “iron triangle” of time, cost and scope largely provides the framework for review.
Agile is becoming a preferred way of developing software and it brings some new paradigms for the auditor. The new drivers of scope are the product vision, product road map, release plan and iteration plans, and not the traditional, relatively easy-to-understand “frozen requirements.” The concept of feature prioritization is largely driven by the business, though IT may provide inputs and comments. There are somewhat ambiguous (at least initially) concepts, such as user stories, story points, done-done, velocity and deployment frequency as a metric for success. The project (and its outcome) is not as discrete as it used to be, but is just a point on the continuum that is expected to deliver consistent, measurable business value over a longer time horizon, say 3 to 5 years.
Consider the financial performance measurement. Traditionally, we tied scope and money and they had to change in tandem. As the agile practitioners like to say (and is music to the ears of business folks), “We welcome change.” So, to what do you allocate money? The team will tell you that they are focused on achieving their story points and velocity goals. The auditor will have to understand how these relate to money. Similarly, time is now a timebox: a set number of resources working for a set amount of time (such as a sprint), and what they will be developing in their next sprint will not be known until the sprint begins.
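One simple way an auditor can relate story points and velocity to money is cost per story point: if the team’s fully loaded cost per sprint is fixed, dividing total cost by total points completed yields a unit economics figure that can be tracked across sprints. The sketch below uses invented sprint data and an assumed fixed team cost; it is an illustration, not an audit methodology.

```python
# Hypothetical sketch: relating agile velocity metrics to money.
# Assumes a fixed fully loaded team cost per sprint; all numbers are invented.

sprint_cost = 50_000.0  # assumed team cost per sprint (monetary units)
sprints = [
    {"sprint": 1, "points_completed": 30},
    {"sprint": 2, "points_completed": 42},
    {"sprint": 3, "points_completed": 36},
]

total_points = sum(s["points_completed"] for s in sprints)
total_cost = sprint_cost * len(sprints)

# One way to tie velocity to money: unit cost per story point delivered
cost_per_point = total_cost / total_points
average_velocity = total_points / len(sprints)
```

A rising cost per point over successive sprints could prompt the same questions a traditional budget variance would.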
Consider the additional paradigms of lean and just-in-time. Was the project truly lean? Was waste minimized? Did the project release working software as frequently as expected? Is technical debt at a manageable level? These are old questions in manufacturing, but somewhat new to IT projects and auditors.
IT auditors will have to invest time and effort into learning these new paradigms to perform effective postimplementation reviews and audits.
Syed Fahd Azam, CISA, CHFI
Risk assessment exercises are carried out in organizations to manage risk that is inherent to the expansion and diversification of business operations. Regulatory requirements have resulted in the establishment of different assurance functions within organizations. Operational risk management, information security, internal audit, internal control and compliance are few examples.
Standards have also evolved with the emergence of these functions. These standards are adopted by the respective functions to assist in the performance of their due diligence activities, including risk assessment exercises. Because of the overlap of control requirements, similar and redundant activities are carried out by multiple units. This results in duplication of effort for the department undergoing the review process.
While typically there is consideration for the convergence of the activities into an integrated assurance process, this is rarely adopted even though control mapping documents are available for guidance. To manage the risk in the organization in a holistic manner, the assurance functions should adopt an integrated risk assessment strategy.
Identification and synchronization of overlapping activities should be carried out to eliminate redundancy. Control self-assessment with common control requirements listed can be developed. The risk can then be calculated on the basis of asset value and control presence to determine its impact and probability. This could benefit the organization through comprehensive risk assessment exercises across the enterprise at a relatively low cost.
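The risk calculation described above can be sketched as a simple function of asset value and control presence, where missing controls raise the probability of loss and asset value proxies for impact. The scaling, weights and floor below are assumptions invented for illustration, not a prescribed formula.

```python
# Hypothetical integrated risk score: impact x probability, where control
# coverage from a common control self-assessment reduces probability.
# Scales, weights and the residual-risk floor are invented assumptions.

def risk_score(asset_value, controls_in_place, controls_required,
               base_likelihood=0.9):
    """Return impact x probability for one asset.

    asset_value proxies impact; fewer implemented controls raise probability.
    """
    coverage = controls_in_place / controls_required   # fraction 0.0 .. 1.0
    probability = base_likelihood * (1 - coverage) + 0.05  # 0.05 = residual floor
    impact = asset_value
    return impact * probability

# Example: a high-value asset with 7 of 10 common controls implemented
score = risk_score(asset_value=100, controls_in_place=7, controls_required=10)
```

Running the same function over every asset, against one shared control list, is what lets the assurance functions score risk once instead of once per department.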
Davis A. Porras Rodriguez, CISA, CISM
It is often believed that a comprehensive audit (“auditoría integral”) is also an integrated audit (“auditoría integrada”). In the audit field, the term comprehensive refers to a whole, which may or may not be approached in an integrated way; the term integrated, however, can cover that whole in a more efficient manner.
For example, external audits, whether performed by consulting firms or by regulators, are typically comprehensive but not integrated: the IT specialist covers technology risk, the money laundering prevention (PLD/FT) specialist covers risk related to money laundering prevention, another specialist handles credit risk, and so on. These specialists never communicate with or support one another until the moment they consolidate the results of their isolated evaluations into a single audit report for the institution they serve.
For an audit to have an integrated approach, the different activities and/or phases of the audit must show coordinated, joint work by the team of auditors. For example, risk should be evaluated by the entire team of specialists participating in the audit, in order to achieve, with a holistic vision, a more adequate assessment of risk and to give priority in the audit to the truly significant risks that, if realized, would negatively impact the business.
One could say that it is easier, faster and more practical for an internal audit unit to adopt an integrated approach in its evaluations than for external auditors to do so. However, an audit with an integrated approach can be carried out regardless of whether the auditor is internal or external to the organization. What matters is achieving communication, coordination and integration of the team across the different activities and/or phases of the audit, thereby avoiding duplication of effort and inadequate risk assessment. The audit supervisor therefore has to play a much more active role and go beyond the traditional audit in which, for example, the money laundering prevention specialist is the only one who can weigh in on the PLD/FT risk evaluation, even though many of these risks are linked to technology risk, credit risk and others.
Read Davis A. Porras Rodríguez’s recent Journal Online article:
“Auditorías Integradas—Un Modelo Práctico” (Integrated Audits: A Practical Model), ISACA Journal, volume 5, 2013.
Regardless of whether we are talking about monitoring the status of security, evaluating its effectiveness or assessing the return on security investment (ROSI) of countermeasures put in place, major concerns must always be the relevance and the quality of the indicators used. Anyone who builds a dashboard needs to ensure not only that it conveys useful information but also that it remains stable over time.
Management needs to grasp information quickly; therefore, would you consider providing a security index that is based on indicators or metrics that are already being used in the dashboard? Such an index would make it possible to aggregate those indicators, greatly simplify representation of the security posture and allow tracking their evolution over time.
Indexes have long been used in different areas to aggregate measures and statistics and to present trends (e.g., stocks, economic growth, consumer, productivity). They do not, however, provide detailed information on the subjects they cover (e.g., the value of a stock market index will remain the same despite the opposing trend of the underlying prices of certain stocks). So, can we use the same approach to assess our security posture in the company? What will such an index be used for, and what information should it convey? Can we envision one day having a standard security index to compare ourselves against or to monitor the general trend of the security posture in a given economic sector? These issues have no immediate answers.
We can, nevertheless, offer to create and use a security index to synthesize indicators and metrics that are available in the dashboard: state of risk factors, maturity of processes, efficiency of operations, costs, etc. We could construct an index for each of these areas or even a composite index for all the elements. The more consolidation there is, though, the more information will be diluted.
For example, to build an index for risks, we can use a simple formula—e.g., the total weight (probability × impact) of high risk factors or the ratio between the total weight of high risk factors and the total weight of other risk factors. Such an index would then make it possible to observe how risk evolves over time and correlate this with security investments.
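Both formulas can be computed directly from a risk register. The sketch below uses invented risk factors and weights purely to show the two calculations side by side.

```python
# Illustrative sketch of the two risk-index formulas described above.
# Risk factors, probabilities and impacts are invented for the example.

risk_factors = [
    {"name": "unpatched servers", "probability": 0.6, "impact": 8, "level": "high"},
    {"name": "phishing exposure", "probability": 0.4, "impact": 9, "level": "high"},
    {"name": "laptop theft",      "probability": 0.2, "impact": 4, "level": "other"},
    {"name": "printer misuse",    "probability": 0.5, "impact": 2, "level": "other"},
]

def weight(rf):
    """Weight of a risk factor = probability x impact."""
    return rf["probability"] * rf["impact"]

high_weight = sum(weight(rf) for rf in risk_factors if rf["level"] == "high")
other_weight = sum(weight(rf) for rf in risk_factors if rf["level"] == "other")

index_total = high_weight                 # formula 1: total weight of high risk factors
index_ratio = high_weight / other_weight  # formula 2: high vs. other risk factors
```

Recomputing either index each reporting period gives the time series against which security investments can be correlated.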
Shah H. Sheikh, CISA, CISM, CRISC, CISSP, CCSK
There has been a great deal of focus and attention on cloud services and the benefit they bring—financially and operationally—to organizations. Little is done to understand the exact security implications that are likely to be faced when discussing security in the cloud. When this particular topic comes into question, a plethora of subject areas related to security come into play, such as data security (residency, custodianship, destruction, transference), legal liability for transborder data flows, disaster recovery, service level agreements, right-to-audit clauses, cloud service risk management, service monitoring, security auditing, and logging in the cloud. The list goes on.
When an organization is looking to outsource elements of IT services into the cloud through the platform, software or infrastructure (PSI) as a service model, it is important to establish a cloud security charter outlining the security requirements from a high-level perspective that are aligned with the organization’s information security policies. The charter itself is driven either technically or through management but ultimately must have the blessing of the board. In its simplest form, the charter identifies what is required from an information security perspective to rubberstamp and approve transitioning services into the cloud. The establishment of the charter is intended to align the organizationwide strategy for cloud adoption and signifies the role information security plays in that strategy and the overall governance.
Following good practices in the industry and implementing standards developed by international organizations provide a solid framework on the life cycle of managing security within the cloud. The issue of cloud security cannot be avoided, and the answer is certainly not product- or solution-based. The traditional security professional mind-set must change, new security techniques need to be adopted and a framework that addresses and manages security risk in the cloud throughout its life cycle needs to be nurtured into the overall cloud transition process from the onset.
Simon Roller, CISA, CISP, DPSM, FBCS CITP
I was recently involved in an organizational skills assessment where we assessed approximately 300 IT staff using the Skills Framework for the Information Age (SFIA). We used SFIA to identify the skills present and the levels of responsibility at which individuals were operating. SFIA uses 7 levels of responsibility, with each level being represented in terms of autonomy, influence, complexity and business skills. We would generally expect each of the 4 components to be at the same level, but what was interesting is that almost 20 percent of the staff had a level of autonomy that was lower than the other 3 components. If individuals were operating at SFIA level 5 for influence, complexity and business skills, their autonomy level was at level 4.
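Spotting this pattern across a few hundred assessments is straightforward to automate: flag anyone whose autonomy level sits below the lowest of their other three SFIA components. The names and levels below are invented sample data, not figures from the assessment described.

```python
# Hypothetical sketch: flagging staff whose SFIA autonomy level lags the other
# three responsibility components. Names and levels are invented sample data.

staff = [
    {"name": "A", "autonomy": 4, "influence": 5, "complexity": 5, "business": 5},
    {"name": "B", "autonomy": 5, "influence": 5, "complexity": 5, "business": 5},
    {"name": "C", "autonomy": 3, "influence": 4, "complexity": 4, "business": 4},
]

def autonomy_lags(person):
    """True if autonomy is below the lowest of the other three components."""
    others = (person["influence"], person["complexity"], person["business"])
    return person["autonomy"] < min(others)

flagged = [p["name"] for p in staff if autonomy_lags(p)]
share_flagged = len(flagged) / len(staff)  # the article observed roughly 20 percent
```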
We did some further investigation as to why this was the case, with some interesting findings. In this example, the operating model of the organization lacked the maturity to support autonomy. Let me explain using a well-known management framework, PRINCE2. As you may be aware, PRINCE2 has 7 principles, 7 themes and 7 processes. One of the critical processes in PRINCE2 is managing product delivery, in which the main objective is to manage the link between the project manager and the team manager. The first, and arguably the most important, activity within this process is accepting a work package. It is during this activity that the team manager and project manager negotiate tolerances for risk, quality, cost and time, and the mechanisms that are employed to communicate information. If this process and activity are followed correctly, the team manager has a clear structure for responsibility and autonomy. If things start to go wrong, there is an agreed process to seek guidance and manage risk. If this process is not in place or not adhered to, the project manager ends up micromanaging the team and the team manager loses autonomy. Although the organization in this example was a PRINCE2 shop, this critical process was not embedded correctly and team members lacked the autonomy they needed. The end result was a demotivated team and an overstretched project manager.
Using SFIA to identify flaws in operating models and process adoption is an excellent way to validate that Responsible, Accountable, Consulted and Informed (RACI) charts; authority levels; and delegation techniques have been understood by and embedded in the organization. Without autonomy, there is a real risk that people will become increasingly demotivated and eventually leave.