Fredrick O. Bitta, CISA
Insider threats are alive and well today; access to information systems has become so critical that organizations have incorporated a periodic user access rights audit into their information security policies, to be carried out by IT auditors. IT auditors need to consistently audit users' access to applications and cross-reference it with the related user roles and responsibilities captured in the job description to ensure compliance. Appropriate segregation of duties is key in this review, as any mismatch must be reported and investigated in a timely manner.
A multiagent system is an organization of coordinated autonomous agents that interact to achieve common goals. An agent is a software or hardware component capable of acting autonomously to accomplish tasks on behalf of its users. Agents exhibit the following characteristics: autonomy, reactiveness, proactiveness, social ability, veracity, benevolence, rationality, learning/adaptation, and a distinct personality, behavior, name and role.
Multiagents, being open source, are able to operate on multiple platforms, continuously monitoring what users access in the system and comparing it with the related roles of those users, as defined in their job titles. The agents also make comparisons to establish whether there is appropriate segregation of duties within a specific user's access in the system. The agents are guided in decision making by the three abstractions of access control systems: access control policies, access control models and access control mechanisms.
In my recent Journal article, we propose a multiagent model in which autonomous agents represent the various aspects of access control captured in the job description, the active users log and the organizational policy on system access. These agents communicate to establish scenarios in which conflicts exist. The conflicts are defined as applications accessed by system users that are not captured in their job descriptions, users accessing the same application as both user and super user, or access policy violations. These conflicts are reported in a risk matrix format as low, medium or high.
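The three conflict checks described above can be sketched in code. The following Python sketch is purely illustrative; the data structures, function names and risk ratings are my own assumptions, not part of the proposed model:

```python
# Illustrative sketch of the three conflict checks described above.
# All data structures, names and risk ratings are hypothetical.

def find_conflicts(job_roles, access_log, policy_denied):
    """Compare each user's observed access against the applications
    permitted by the job description and the organizational policy."""
    conflicts = []
    for user, accesses in access_log.items():
        allowed = job_roles.get(user, set())
        apps = {app for app, _ in accesses}
        # 1. Application accessed but not captured in the job description
        for app in apps - allowed:
            conflicts.append((user, app, "outside job description", "medium"))
        # 2. Same application accessed as both user and super user
        for app in apps:
            levels = {level for a, level in accesses if a == app}
            if {"user", "superuser"} <= levels:
                conflicts.append((user, app, "segregation of duties", "high"))
        # 3. Access explicitly denied by organizational policy
        for app in apps & policy_denied.get(user, set()):
            conflicts.append((user, app, "policy violation", "high"))
    return conflicts

job_roles = {"alice": {"payroll"}}
access_log = {"alice": [("payroll", "user"), ("payroll", "superuser"),
                        ("ledger", "user")]}
policy_denied = {"alice": {"ledger"}}
for conflict in find_conflicts(job_roles, access_log, policy_denied):
    print(conflict)
```

In the proposed model these checks would, of course, be distributed across communicating agents rather than run in a single function.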
Larry G. Wlosinski, CISA, CISM, CRISC, CAP, CDP, CISSP, ITIL
When IT security started becoming important to the protection of computer systems on the network, a fellow security colleague asked me a question that is even more relevant today. My colleague was an information system security officer (ISSO) who was having trouble convincing a user that annual IT security training was important and was required. The user had asked him, “Who is responsible for IT security?”
When the ISSO asked me the question, I responded “Is this a trick question? Everyone is responsible for security.” I believe this is even more the case today.
Over the past 15 years, I have seen IT security evolve from background checks and physical security concerns to defending the network against malicious software (e.g., viruses, worms, botnets, Trojan horses, phishing attacks) initiated by mischievous and criminal minds, to defending against threats from established criminal organizations, terrorists and identity thieves. In today's world, the skill set requirements have increased considerably, and they continue to do so. IT security must defend the financial industry from ruin, prevent the loss of corporate secrets and safeguard the information of everyone in the organization (and with whomever they conduct business).
The threats have moved from direct terminal entry into the system (or network) to wireless devices of all sorts. Almost all of the people around you have one or more wireless devices, such as a cell phone, laptop, digital notepad or iPod. As with all new products, inventors add new features and capabilities for public use, and it is up to those in IT security to protect the confidential, sensitive, personal and mission-critical data within the organization.
Where it was once simply a matter of implementing defensive host and network software, the required security skills have evolved; they have become specialized because of the numerous types of attack vectors and the high number of criminal entities. Today, there are IT security specialists in system security assessment (and audit), security architecture, risk and compliance management, network and operations security, application security, incident response, computer forensics, penetration testing, malware analysis, contingency planning, and identity management.
Information security has become so complex and important to everyone that it has evolved into a service. In my recent Journal article, I provide some insight into the shift in security responsibilities for organizations that have moved their data, and in some cases their systems, to the cloud. For those organizations that think the ISSO is the only person needed to protect their data and systems, I invite you to read about how the security boundary is changing and how it now takes many people to keep your data secure.
Vasant Raval, DBA, CISA, ACMA
Trusting those we interact with is a norm. Our everyday life is surrounded by scores of people whom we trust: trust that they will uphold their end of a responsibility. And when they do indeed act responsibly, the day becomes more predictable and our efforts more productive. Nothing shakes loose our faith in the normal course of events; expectations meet reality and no one notices anything of concern.
When I go to the university each morning, I find the facility has been cleaned, as expected, the night before. The trash was taken out and recyclable bins were emptied. The coffee room is open and no one has spilled anything there, and if someone did spill, it was all cleaned up by the person who ran into the mishap. I go to teach my classes without wearing armor; of course, I trust that my students will not violate my trust in them. It has worked for more than 30 years and I am not nervous about the daily norm of walking into the classroom as I anticipate nothing out of the ordinary.
When you trust others, you do elicit from them—whether they are aware of it or not—a sense of duty to perform their part. Any time this implicit contract is violated, you wonder if this was a one-time blip or a permanent change in the trust relationship. A change in expectations spills over into a realignment of trust; now you are more cautious and rather consciously protective of yourself. And this goes for any level of humanity: a person, a family, a church group, theatergoers, institutions and businesses, nations, and the world. A broken trust means steps to curb the defectors, and these may come from society, institutions or the government.
Trust, risk and responsibility are closely related. Trust generates reciprocal responsibility and, at the same time, remains a source of risk in the event that the trust is violated. The response from people and organizations is likely to be consistent with the nature of harm by the defectors. If a person lies and you are not affected in any manner—in fact, you may not even be aware—your response to the trust violation would probably be negligible or none. On the other hand, misleading senior citizens into signing an expensive long-term service contract that they cannot afford could trigger an angry and even violent response from the neighborhood.
Trust is sourced in humans. Technology can only be the enabler in supporting trust, and sometimes even in violating it. Tools provide the means to do things (right or wrong); they do not drive a human being's disposition to be a dove or a hawk. When humans design information systems or innovate technology, the source of trust transfers from those who designed the systems or developed the new technologies to the systems and technologies themselves. A driverless car gets to be trusted because you believe the designers have delivered a reliable vehicle. On the other hand, the batteries in Dreamliner jets are suspect, and the people responsible for the product are working to overcome this.
Do you believe trust is transferable to technology and systems? Do we treat trust in humans differently than trust in technology and systems? I invite you to share your thoughts.
Deepa Seshadri, CISA, CISM
When I visit organizations and talk to their employees, I hear them express many doubts about the various standards, their implementation and their implications. The variety of Service Organization Control (SOC) reporting options provided by the American Institute of Certified Public Accountants (AICPA) only serves to increase doubts in employees' minds about their relevance and suitability. These reporting options include SOC 1, SOC 2 and SOC 3 reports.
SOC 1 reports can reassure organizations and auditors about the efficacy of internal controls; similarly, SOC 2/SOC 3 reports help address areas such as security, confidentiality, availability, process integrity and privacy. To make the best use of these reports, organizations need to clearly understand the differences in these reports and the circumstances in which they are applicable.
Some of the considerations in the choice of reports include:
- For what will the report be used?
- Who will use the report?
- What is the objective of the report (e.g., to demonstrate in detail the controls that are in operation in the organization)?
- Is this a shared service organization serving multiple clients with similar processes?
- What is the nature of the business with regard to cloud computing, data center operations, payroll processing, medical claim processing, etc.?
- Do you handle or have access to client personally identifiable information (PII)?
- Do any agreements with customers mandate providing assurance on internal controls?
- Does your report need to cover/map requirements such as those in the US Health Insurance Portability and Accountability Act (HIPAA) or from the Cloud Security Alliance?
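As a rough illustration only, the considerations above can be encoded as a simplified decision aid. The mapping below is my own simplification of the report types' general purposes, not AICPA guidance, and the parameter names are hypothetical:

```python
# Simplified, illustrative mapping from the considerations above to a
# suggested SOC report type. This is not AICPA guidance; real selection
# requires professional judgment.

def suggest_soc_report(purpose, audience, needs_detail):
    """purpose: 'financial-reporting' or 'operational'
    audience: 'restricted' (clients and their auditors) or 'general-public'
    needs_detail: whether the reader needs control-level detail."""
    if purpose == "financial-reporting":
        # Controls relevant to user entities' financial reporting
        return "SOC 1"
    if audience == "general-public":
        # General-use report without control-level detail
        return "SOC 3"
    return "SOC 2" if needs_detail else "SOC 3"

print(suggest_soc_report("financial-reporting", "restricted", True))  # SOC 1
print(suggest_soc_report("operational", "restricted", True))          # SOC 2
```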
In my recent Journal article, I have tried to distinguish, in a simple manner, between the myths and facts about SOC 1 (ISAE 3402/SSAE 16) reports. I look forward to your comments on the myths and facts around these reports.
Read Deepa Seshadri’s recent Journal article:
“Common Myths of Service Organization Controls (SOC) Reports,” ISACA Journal, volume 2, 2013.
Tommie W. Singleton, Ph.D., CISA, CGEIT, CITP, CPA
There are many things a person needs to know, learn, acquire and understand about conducting IT assurance and audit work. When it comes to IT audit specifically, there are several things IT auditors need to learn, three of which are addressed herein.
The first is about the proper performance of an IT audit. This aspect is clearly about technical standards and guidance. ISACA is the leader in providing such information. ISACA’s general standards would be the best choice for technical literature. IT auditors must be competent in knowing and understanding that technical literature as well as other relevant technical literature (e.g., that of The IIA and AICPA).
The second main issue is process. Specifically, where, as an IT auditor, do you begin the process, what steps are involved, and what do you do in each step? This is a function of guidance available in a variety of formats. As a matter of practicality, the steps are usually driven by commonly accepted best practices and by the individual entity conducting the procedures. In my recent Journal column, I discuss some resources for properly defining the process. One excellent resource for gaining a thorough understanding of all it takes to do a proper IT audit is ISACA's IT Assurance Framework (ITAF); another is COBIT 5.
A third issue is the metrics used to measure effectiveness, compliance and/or adequate controls, which was the focus of my recent column. There are several sources for the “ruler” used by the IT auditor: authoritative guidance, guidance on compliance with laws and regulations, and best practices or principles regarding controls (e.g., best practices in information security, backup and recovery, passwords/logical access controls, IT function controls, firewalls, perimeter security).
On occasion, an IT audit has a peculiar nature that brings other literature and guidance into the picture. My recent column was written to help identify situations that normally need other technical literature and guidance, and to match that guidance with the type of audit.
Nikesh Dubey, CISA, CISM, CRISC
Today, organizations are more security-conscious and increasingly aware of the role information security plays in the success of the business. Keeping information security programs on course and aligned with business strategies and objectives is a vital role of top management. However, given the growing number of portfolios that top management juggles and the unforeseen hurdles and real-world challenges that information security programs face on a day-to-day basis, it can become daunting to know accurately the current direction of information security programs, tasks and activities.
In the maze of excellent information security tools, products and governance frameworks, there seems to be a need for a metric that helps top management get an accurate look at the current direction of information security programs and better manage them over time. The intention is to keep management honest and knowledgeable about its own environment, provide realistic insights, and surface the various challenges and actions that directly impact the success of security programs, which in turn helps retain management commitment.
In the beginning, the parameters driving such a metric may be subjective and qualitative, but over time organizations can gain insight into the unique parameters that drive them and move toward a more quantitative approach.
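As a small illustration of that move from qualitative to quantitative, such a pulse metric could start as a simple weighted score over a program's drivers. The parameters, weights and scores below are entirely hypothetical; each organization would derive its own:

```python
# Hypothetical weighted "pulse" metric for an information security program.
# Parameter names, weights and scores are illustrative only.

def program_pulse(scores, weights):
    """scores: parameter -> assessment on a 0-10 scale.
    weights: parameter -> relative importance.
    Returns the weighted average, also on a 0-10 scale."""
    total_weight = sum(weights.values())
    return sum(scores[p] * w for p, w in weights.items()) / total_weight

scores = {"management_commitment": 8, "incident_trend": 5,
          "awareness_coverage": 6, "audit_findings_closed": 7}
weights = {"management_commitment": 3, "incident_trend": 2,
           "awareness_coverage": 1, "audit_findings_closed": 2}
print(round(program_pulse(scores, weights), 2))  # 6.75
```

Tracked period over period, even such a crude score makes direction visible; the real work lies in choosing and calibrating the parameters.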
My recent Journal article is an attempt to highlight and address some of these areas. It is indeed a challenge to derive such a metric; however, I believe that in today's wired world, where all aspects of business are impacted by the capabilities, reach and breach of information technology, such a pulse-tracking metric could be significant and groundbreaking for the success of information security programs.
Rajesh Bhatia, CISA, CGEIT, PMP, MDP
Governance models need to be implemented in most outsourcing vendor engagements to gain value, efficiency, effectiveness, productivity and return on investment (ROI). This is because, when working with a vendor, strategic alignment, value delivery, risk management, resource management and performance measurement need to be ensured (Board Briefing on IT Governance, 2nd Edition and COBIT 5). Operating in silos with a vendor is a recipe for failure; productivity, value and ROI will not be attained from the engagement.
In this regard, I was recently asked to design a statement of work (SOW) for a vendor engagement for a transformation project at my company. I looked at the standard SOW templates and noted the presence of common requirements such as project scope, timeline and milestones, high-level schedule, and acceptance criteria. Surprisingly, what I did not find in any template was any detail on the implementation of governance models and control processes.
The Global Status Report on the Governance of Enterprise IT (GEIT)—2011 clearly states that optimal governance enablers need to be in place to ensure direction and monitoring of vendor performance, procurement of services, definition of service level agreements (SLAs), and review of demand and supply decisions on sourcing models. This is because 93 percent of the responding companies had fully or partially outsourced some activities. The report also mentions that although consulting companies had a high capability for implementing governance solutions, respondents' scores placed them on the poor end of the spectrum.
This leads me to believe that, with such a high rate of outsourcing, governance models and control processes need to be designed into vendor engagements. Failing to design them in will probably lead to failure of the engagement and frustration on both sides.
So, I took it upon myself to modify the SOW template and include a section for governance models and processes. I started thinking about what would be required: alignment of goals, ensuring value is obtained from the engagement, monitoring and tracking vendor performance, risk management, typical governance structures (i.e., decision-making structures), processes, controls, and communications. Since we are dealing with a vendor, the presence of adequate controls is essential to ensure appropriate project performance, vendor performance, vendor compliance with standards, and processes for escalation management, change control and so on. Thus, I came up with an essential list of controls that must be present, including phase gate reviews, peer reviews, SLA reviews, daily project management meetings and business sign-off. Now we have a comprehensive vendor SOW.
Nurudeen Odeshina, CISA, CISM, CRISC, ISO 27001 LI, ITSM
In most organizations in Nigeria today, especially those within the public sector, an increasing amount of funds is being allocated to and spent on physical security. While there may be a valid business need for these investments in physical security, an important question to ask is: “What is being done to secure information?”
Information is increasingly becoming the most important and most valuable asset of any organization. Organizations thrive on information, and its associated enabler is information technology. Because of the value of information to organizations, it must be suitably protected and preserved in terms of confidentiality, integrity and availability (CIA). In most cases, protecting information goes beyond the physical; processes, people and technology are other factors that come to the fore.
There are instances where organizations are reengineering processes and implementing technologies to secure their information while nothing is being done to secure the people (e.g., employees, contractors, service providers). People are the custodians and users of any technologies or processes put in place within an organization and, therefore, they require a regular dose of TEA (training, education and awareness) on information security.
With regard to processes, a good starting point for any organization is to adopt and implement an information security management system as prescribed by ISO/IEC 27001:2005.
Ali Alaswad, ITIL, PMPG, PMP
The fundamental goal behind shifting efforts from securing merchants' environments to securing the credit cards themselves is to allow merchants to allocate the time and cost associated with these efforts to the improvement and expansion of the business.
Attaining Payment Card Industry (PCI) compliance is not a straightforward task: Projects implemented to achieve compliance require resources with diverse skill sets, and merchants' organizations have to be assessed from different aspects, considering both IT and non-IT perspectives. Such an investment can be costly, yet the cost of a breach can easily be many times the cost of attaining PCI compliance.
The Smart Credit Card proposed in my recent Journal article eliminates fraud at the merchant level by shifting the liability to the card-issuing banks and credit card holders (the total amount of credit card fraud worldwide is US $5.55 billion annually [source: www.statisticbrain.com]) and by introducing a new credit card with technical features that enable card holders to generate temporary credit card numbers valid for one-time use and to confirm or decline transactions.
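The derivation mechanism is not specified here, but the idea of a single-use number can be illustrated with one conventional construction: an HMAC over a per-card secret and a usage counter. Everything in this sketch, including the function name, issuer prefix and digit layout, is a hypothetical illustration, not the Smart Credit Card design:

```python
# Hypothetical sketch of deriving a one-time card number from a secret
# shared between the card and the issuing bank. This is NOT the article's
# actual mechanism; it only illustrates the idea of single-use numbers.
import hashlib
import hmac

def one_time_pan(secret: bytes, counter: int, prefix: str = "411111") -> str:
    """Derive a 16-digit single-use number: a fixed issuer prefix plus
    10 digits taken from an HMAC of the per-transaction counter."""
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    dynamic = int.from_bytes(mac[:8], "big") % 10**10  # 10 derived digits
    return prefix + str(dynamic).zfill(10)

secret = b"card-unique-secret"
print(one_time_pan(secret, 1))  # a fresh number for each transaction counter
print(one_time_pan(secret, 2))
```

A real scheme would also need a valid Luhn check digit, issuer routing rules and server-side verification that each number is used only once; those details are omitted here.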
Merchants, card-issuing banks and credit card holders all benefit from this solution.
The value proposition to merchants:
- Reallocate the PCI compliance implementation and sustainability funds to other areas in the organization in accordance with the organization’s defined strategic business objectives.
- Enhance business reputation and increase client trust, leading to the increased use of credit cards for payment.
- Avoid penalties and the risk of not being PCI-compliant.
The value proposition to the card-issuing bank:
- Increase the reliance on credit cards as a method of payment.
- Save the time and cost associated with the administrative work of fraud management related to merchants' transactions.
The value proposition to card holders:
- Enjoy the protection provided by the Smart Credit Card solution and the associated freedom to use credit cards, as this solution addresses different fraud types (e.g., online and phone banking fraud [card not present], counterfeit card fraud, card loss/theft).
- Save the monetary cost and inconvenience caused by fraud.
- Enjoy peace of mind.
Joanne Joseph, CISA
I would like to set the stage for the exchange of ideas on the points expounded upon in my recent Journal article on data privacy and legal challenges.
In the article, I reviewed certain aspects of data privacy: threats, the types of data at risk, how data privacy breaches occur, the impact of privacy abuse on individuals and organizations, and the legislation and protective measures currently in place across Europe and the US.
To give an example of how a data security breach can occur, quite recently, an officer at a bank was processing a financial transaction for me. Before logging into the bank’s computer system, he accessed his phone and then commented aloud, “Boy, what would I do without this phone? I have all my passwords stored here.”
How many of us store our passwords in a readable format on a mobile device? What protection do we have for these passwords? Furthermore, do we let others know that our passwords are stored there?
This information was inadvertently disclosed to me and I did not even need to know where the passwords to the bank’s computer systems are stored. Based on this example alone, it seems that there may be opportunities within our everyday activities for perpetrators to gather sensitive data.
A number of questions arise:
- Do we have a sense of duty to protect sensitive data?
- What local legislation is in place within our jurisdiction?
- Are company policies enough?
- Is there a magic bullet in solving data privacy issues?
I look forward to an active discussion on these issues.
Read Joanne Joseph’s recent Journal article:
“How Safe Is Your Private Information?,” ISACA Journal, volume 2, 2013.