Cybersecurity is a Proactive Journey, Not a Destination

Mike Wons

Cybersecurity continues to grab the spotlight and mindshare when it comes to computing and social trends.

The topic itself is broad and expansive, and the true impact of this segment of computing will be around for generations to come. For strong perspective on where the industry stands in its current state, ISACA’s State of Cybersecurity 2018 research is a must-read. This report provides a great assessment of what needs to happen in the cybersecurity field to move from reactive to proactive.

Challenges around cybersecurity are not new and have actually been around since the dawn of computing. However, it is now a topic that everyone talks about. It is a board topic, it is a public safety and livelihood topic, and it is a personal topic. Hitting this trifecta of impact has finally created the sense of urgency and the attention that is needed. Now, the key is that as an industry, as a country, and as a world of over 7 billion people, we need to effectively address these industry challenges to preserve the computing environment for the future.

Today, most cybersecurity efforts are focused on what is referred to as the “EMR” model of educate, monitor, and remediate. This approach is effective but is essentially like the game of “whack-a-mole,” where the core underlying risks and issues are never solved and keep popping up.

So, how does the governing of cybersecurity become proactive?

While EMR is essential, the core foundation of a more secure and trustworthy computing experience requires being more proactive. Proactive means ongoing, real-time, continuous self-testing and self-assessment, and a laser focus on education in best practices. This, combined with the continued evolution of the new SaaS (security-as-a-service), will help mitigate risk and build more trust in the future. Still, it will be very difficult to solve all cybersecurity challenges because of the technical debt that exists today and will persist for the immediate future.

Safe and secure computing can occur with a connected, comprehensive approach to security embedded in each of the leading digital disruption levers, from the Internet of Things, to conversational artificial intelligence, to blockchain and distributed ledger technology, to wearables and mobility. Industry focus, industry standards, close adherence to best practices, and the constant ability to randomize to protect digital identities are on the horizon and need to continue gaining momentum.

However, first and foremost, security best practices begin at the code level. As software engineers and as an innovation industry, we must make sure this is well-executed at each and every opportunity we have.
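
For example (a minimal sketch, assuming a SQL Server back end; the server, database and table names are hypothetical), binding user input as a typed parameter rather than concatenating it into the query string is one such code-level practice:

  # Hypothetical illustration: parameterized queries are one code-level defense against
  # SQL injection. Connection string and table name are made up for this sketch.
  $userInput  = Read-Host "Customer name to look up"
  $connection = New-Object System.Data.SqlClient.SqlConnection -ArgumentList "Server=db01;Database=AppDb;Integrated Security=True"
  $command    = $connection.CreateCommand()

  # Unsafe: "SELECT ... WHERE Name = '$userInput'" splices raw input into the SQL text.
  # Safer: bind the input as a parameter so it can never change the structure of the query.
  $command.CommandText = "SELECT Id, Name FROM Customers WHERE Name = @name"
  [void]$command.Parameters.AddWithValue("@name", $userInput)

  $connection.Open()
  $reader = $command.ExecuteReader()
  while ($reader.Read()) { $reader["Name"] }
  $connection.Close()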

Author’s note: Mike Wons is the former CTO for the state of Illinois and is now serving as Chief Client Officer for Kansas City, Missouri-based PayIt. Mike can be reached at mwons@payitgov.com

Five Keys for Adaptive IT Compliance

The fluid technology and regulatory landscape calls on IT compliance professionals to be more flexible and proactive than in the past to remain effective, according to Ralph Villanueva’s session on “How to Design and Implement an Adaptive IT Compliance Function,” Monday at the 2018 GRC Conference in Nashville, Tennessee, USA.

The IT compliance function serves as an important bridge between the audit and IT departments, in addition to articulating business-related IT and security initiatives to management, and recommending and implementing appropriate compliance frameworks.

Business model changes, legal considerations, government requirements and evolving industry regulations are among the common reasons that organizations may need to more frequently explore switching their frameworks than in the past. Villanueva, IT security and compliance analyst with Diamond Resorts, referenced the General Data Protection Regulation (GDPR), which became enforceable in May, as an example of a recent regulatory shift that could have significant compliance ramifications. Additionally, he cited industries such as banking, healthcare and gaming as having special requirements calling for the use of compliance frameworks.

While acknowledging that the need to explore new or additional frameworks can cause “compliance anxiety” and organizational resistance, considering the corresponding investments in time and resources, Villanueva said effective use of people, processes and technology can make the process worthwhile in the long run. Given the increasing need to implement different frameworks to deal with a growing set of compliance complexities, Villanueva laid out five steps to be actively compliant across several frameworks while remaining in line with budget realities:

  1. Understanding beats memorizing. Compliance professionals who truly understand the intent of a framework are best positioned to adapt it to their organizations.
  2. Know your organization. Having a clear handle on the organization’s business model, mission and array of information and technology resources allows for more strategic compliance.
  3. Anticipate how today’s trends will influence what you do tomorrow. Variables such as the need to incorporate more mobile device security and use of emerging technologies such as artificial intelligence (AI) and machine learning may call for recalibrating compliance processes.
  4. Know that some fundamentals never change. Despite the volatile landscape, Villanueva said there still needs to be focus on established compliance priorities such as application controls and segregation of duties.
  5. Keep learning. Investing in personal development and prioritizing networking are some of the best ways to keep current and “future-proof” career paths.

Villanueva cited COBIT 5, NIST 800-53, ISO 27001:2013 and PCI DSS 3.2 as examples of useful frameworks for compliance professionals, and said identifying commonalities among different frameworks can make for a more efficient approach (see the cross-reference sketch after the list below). He recommended IT compliance frameworks because they:

  • Simplify compliance;
  • Reduce the likelihood of missing compliance requirements;
  • Maximize everyone’s time;
  • Allow for clearly understood expectations;
  • Are commonly accepted by control stakeholders.
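
One way to put that commonality into practice (a minimal sketch; the control names and framework references below are illustrative only, not an authoritative mapping) is to keep a simple cross-reference of each internal control to the clauses it addresses in every in-scope framework, so a single piece of evidence can be reused across audits:

  # Hypothetical cross-reference: one control, several framework citations.
  # Validate the references against the current text of each framework before relying on them.
  $controlMap = @(
      [pscustomobject]@{
          Control   = "Quarterly user access reviews"
          COBIT5    = "DSS05.04"
          ISO27001  = "A.9.2.5"
          NIST80053 = "AC-2"
          PCIDSS    = "Req. 7"
      },
      [pscustomobject]@{
          Control   = "Segregation of duties in change management"
          COBIT5    = "BAI06.01"
          ISO27001  = "A.6.1.2"
          NIST80053 = "AC-5"
          PCIDSS    = "Req. 6.4"
      }
  )

  # One evidence request can then be tagged with every framework requirement it satisfies.
  $controlMap | Format-Table -AutoSize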

The importance of compliance professionals should not be overlooked. Aside from potential legal ramifications resulting from inadequate compliance, Villanueva said having strong compliance programs in place is critical to deter corruption and costly illegalities.

“We’re here to make sure that crime doesn’t pay,” Villanueva said.

Cultural Considerations of Adopting Application Container Technology

Robin Lyons

The benefits of application containers have been shared across a variety of forums and with a diverse audience. The ability to run more application instances without a corresponding increase in hardware is probably the primary benefit used to persuade enterprises to adopt application containers. If that is the primary benefit, meeting the rapid-deployment objectives associated with DevOps is a close second.

Application containers allow developers to modify and test applications easily because each application is siloed in its own container. So, the benefits are appealing from a cost-savings perspective as well as in supporting DevOps deployment. Is there a downside, though?

Perhaps it is not a downside as much as a consideration, but as organizations adopt application containerization, some cultural shifts are necessary. These shifts relate to operational processes that organizations may already have in place; however, containerization requires doing those familiar processes differently. Because the change is for an existing process rather than the implementation of something new, the change is more cultural than operational. For example, in a traditional application environment, generally, there is a structured process for code review, which the time to deployment accommodates. As deployment time is shortened (as in a scenario involving DevOps and application containers), organizations may be challenged in how they perform formal, structured code reviews. So, a cultural shift to identify (and accept) solutions that provide assurance around secure coding in the containerized environment despite the rapid speed of deployment may be required.

Another area where a cultural shift may be required relates to access. Unless an organization develops a strategy around administrator access, it is possible for administrators to have access to multiple hosts, containers and images rather than only the specific hosts, containers and images they need to perform their job responsibilities. Ensuring that a least privilege strategy is implemented would address this. Also, beyond internal expectations, several compliance initiatives, such as the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) and the General Data Protection Regulation (GDPR), rely on strong access controls.

Lastly, an organization’s approach to authentication may require a cultural shift. In administering workloads, orchestrators potentially place workloads that have varying levels of sensitivity on the same host. To address this, an orchestrator may have its own authentication directory. This directory, however, may be separate from other non-orchestrator authentication directories in use. As a result, the orchestrator’s authentication directory may have different authentication practices. A concerted effort to ensure alignment of authentication practices for all directories (orchestrator-related or not) may be necessary. These efforts may include, but are not limited to, restricting administrator authentication access to specific repositories rather than multiple repositories.

The benefits of adopting application containers are appealing. More application instances may be possible without incurring the cost of additional hardware, and deployment time may be reduced. Effective adoption, however, depends on how organizations modify existing protocols to accommodate the containerized environment. Code review, access and authentication are examples of areas for which organizations routinely have controls but where a cultural shift is necessary. Once these shifts have been made, the benefits of application containers can be fully realized.

IT Audit Co-sourcing Requires a Strategic Touch

Mais Barouqa

The 7th annual IT Audit Benchmarking Survey shed light on several IT challenges that are at the top of the agenda for executive management and will have a direct impact on IT audit plans for many enterprises in 2018.

While the survey highlighted several key challenges, I will drill more deeply into one key aspect: the co-sourcing of IT audit. The survey noted that IT audit’s role has grown since 2012, with half of all organizations now having a designated IT audit director. Such growth emphasizes the importance of the IT audit role. Given current technological advancements, IT audit plans must be aligned with, and inclusive of, the risks that accompany those advancements. That not only calls for a different set of skills to deliver value-added audit results, but also requires internal management to reconsider their IT audit plans.

Before adopting a co-sourcing practice, management should assess its current internal IT audit skills to clearly understand what the co-sourced team should add and what the internal department can cover. To conduct such an assessment, management should begin identifying the technological areas for upcoming IT audits during the early planning stages. Moreover, the internal audit department has a better understanding of the scoped systems, infrastructure and processes, details that will take the co-sourced team additional time to grasp. Accordingly, audit deadlines should take this into account when the plan is prepared so that valuable audit results can be delivered.

Another point to consider before co-sourcing is an emphasis on knowledge-sharing by the co-sourced team, to ensure that the skills of internal team members are elevated and enhanced by the engagement.

Management applies co-sourcing to leverage the business and technical exposure of outside specialists in the areas internal IT auditors lack. Management should not use co-sourcing to force a complete transformation of internal audit to match the co-sourcing company. That said, management should always ensure that the company’s own internal practices are applied and taken into consideration throughout the co-sourced team’s work and deliverables.

Auditing and Knowledge Management

Diana Hamono

Have you ever wondered what happens to all of that data, information and knowledge collected and created by internal auditors? Have you ever thought about audits you performed in the past: all that research, information gathering, development of findings, the useful collection of methods, questionnaires, test plans, etc.? Wouldn’t it be useful to share your learnings with your colleagues?

After 30 years in the internal audit profession, I have seen the data and information collection and sharing methods move from paper-based to electronic in various forms. I have seen new auditors enter the profession trying to learn about auditing, the techniques, technology, and the different methods of collecting and sourcing information, not to mention the best practices for writing the audit report. After learning about knowledge mapping and knowledge management systems, I applied this to developing an internal audit knowledge management strategy and blueprint for an internal audit knowledge management system.

To be a proficient internal auditor requires special skills, attributes, experience and knowledge; not all people have these or know where to find them. Most organisations are always searching for people with the right mix of skills, attributes and experience that could potentially evolve into highly proficient and valuable internal auditors.

In all organizations for which I have worked as an internal auditor, a range of existing complementary systems, tools and business processes have been considered in the design of the knowledge management system (KMS) to ensure a coherent information architecture is designed.

Auditors must collect necessary and sufficient information to produce a rational and comprehensive analysis; many auditors need to document appropriate evidence to explain and defend potentially adverse findings. All auditors require expert knowledge of governmental regulations, business norms and practices, and often generate new knowledge about the regulations, norms and practices that they examine during their engagements.

Because audit plans depend so heavily on the expertise of auditors, the quality and comprehensiveness of the information they collect, and the findings they produce, a systematic approach to knowledge management becomes critical to ensure the accuracy, efficiency and quality of audit engagements across all business disciplines.

Read this white paper about an audit team’s efforts to identify and map its knowledge, and then use that information to develop a knowledge strategy and knowledge management system. Discover how your team can also optimize the quality and management of its knowledge, leading to improved services to all clients.

Knowledge management is concerned with using to best advantage the knowledge and experiences that have been gained across an organisation. Three elements appear in a wealth of literature – data, information and knowledge – and a good understanding of these is the key to grasping the issues faced by many internal auditing organisations.

Data is a series of discrete events, observations, measurements, or facts in the form of numbers, words, sounds, and/or images. In the internal audit arena, data can take many forms and can be unstructured or structured. An example of data used in internal auditing is a spreadsheet that contains accounts payable amounts, dates and vendors, purchase order numbers and so on.

Information is the organised data that has been arranged for better comprehension or understanding – it has been endowed with relevance and purpose. An example of information used for an internal audit is a transcription of interview notes taken by an auditor after interviewing an auditee to extract pertinent information. One person’s information can become another person’s data.

The knowledge that is used and generated by internal auditors can be thought of as a collection of specific data, specific and broad information sets, the skills attained, and experience in similar audit situations.

Being able to effectively manage not only the knowledge of individuals, but also the collective knowledge in the organisation, is crucial to the efficient and effective delivery of outcomes.

It should be noted though, that internal auditing is not a one-person job. Internal auditing requires collaboration and the integration of information and knowledge both from within the auditing organisation and from the auditee’s sources to enable a valuable outcome for all involved.

For an internal auditing organisation to benefit from the knowledge of its staff, it’s important to identify and map the knowledge that is needed to complete quality and efficient internal audits.

Payment Security and PSD2

This year has welcomed the Revised Payment Services Directive (PSD2), but what is the core reasoning behind writing the new security regulation? “There is a revolution in commerce,” Jorke Kamstra stated in his session Monday at ISACA’s 2018 EuroCACS conference in Edinburgh, Scotland.

Kamstra, information risk manager at Euroclear and an ISACA member, described “open banking” as a buzzword and explored the innovations that are having an impact globally on the retail sector. Today, firms such as Facebook, Money Dashboard and Stripe share a common thread in their offering of financial services, despite not being tagged as a “traditional” bank as we know it.

Customers now have the ability to make payments online using their main banking credentials as opposed to relying on credit card details or the services of a bank. In fact, the artificial intelligence (AI) technologies used to build Amazon Go allow for instant payment as customers physically add goods to their baskets in store.

This revolution in commerce has a domino effect on the supporting security measures businesses instil. According to Kamstra, “Security needs to be seamless, frictionless and fast in order to keep up with these innovations.”

The roll-out of PSD2 was designed as a solution to this, filling the gaps of older frameworks and reinforcing consumer protection with measures such as two-factor authentication. Compared with the new PSD2, existing frameworks already cover roughly 80 percent of its controls and are largely risk-based. However, Kamstra commends the new regulation for its mandatory controls to protect consumers and its guidance on regular testing as best practice.

PSD2 presents new opportunities for non-banks in the form of open APIs. By using banks’ APIs, non-banks can enter the financial market without the compliance and infrastructure considerations required by banks.
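
As a rough illustration (assumptions: a hypothetical sandbox endpoint, an access token already obtained through the customer's consent flow, and placeholder header names; this is not any particular bank's API), a licensed third party might retrieve account information along these lines:

  # Hypothetical account-information request in an open-banking style API.
  # Endpoint, token and header names are placeholders, not a real bank's specification.
  $token   = "<access-token-obtained-via-customer-consent>"
  $headers = @{
      Authorization  = "Bearer $token"
      "X-Request-ID" = [guid]::NewGuid().ToString()      # unique identifier per request
      "Consent-ID"   = "<consent-id-from-the-consent-flow>"
  }

  $response = Invoke-RestMethod -Method Get `
      -Uri "https://api.examplebank.test/psd2/v1/accounts" `
      -Headers $headers

  # A typical response lists the customer's accounts, which the third party can then
  # aggregate across banks into a single, customer-centric interface.
  $response.accounts | Select-Object iban, currency, name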

What is more, this shift is encouraging a more mobile and customer-centric approach to banking. Customer expectations are changing, as is the need to streamline banking and create a more personalized experience. Today, customers are more open to banking outside the traditional realms, and if it means they can manage multiple accounts through one interface, they are likely to welcome it.

New payment methods such as virtual currencies, biometric authentication and Account Aggregation as a Service (AAAAS) are just a few examples of innovations that are gradually being accepted as the “norm” by consumers.

Kamstra closed his session with four main actions for his audience:

  • Do not forget the customer; people-centric systems are much more likely to be successful than those that are inward focused;
  • Look to yearly auditing and testing as best practice;
  • Fully understand why PSD2 has been implemented to appreciate the opportunities;
  • Use your knowledge as an auditor to go forward and think about how you can innovate in your own organization.

A Governance Perspective of Audit Policy Settings

Ookeditse Kamau

The task of establishing and configuring audit policies is usually left to security experts and/or system administrators who are in charge of implementing security configurations, particularly in small-to-medium enterprises with a lean IT structure. There is usually not much guidance on how these configurations are to be managed.

One common mistake that administrators make is failing to define adequate audit trails to enable early detection of security threats and allow for related investigations. The main reason for this oversight is a failure to balance audit trail needs against system capacity. Some administrators argue that excessive auditing produces huge volumes of event logs that are unmanageable. Deciding what to audit and what may safely be omitted is therefore not just a configuration task, but a risk assessment task that should be embedded in the governance structures of the organization’s IT security frameworks.

Risk assessment process over audit requirements
The audit needs of the organization are guided by the regulations to which it is subject, its security threat models, the information required for investigations and its IT security policy. Identification of the possible threats that the organization faces is usually carried out as part of risk assessment. Security events derived from audit policy settings are key risk indicators that the organization should use to measure how vulnerable the system is to the identified threats. It is therefore critical that the enabling of audit policies not be treated casually.

System auditing should be considered across the platforms the organization uses – that is, operating systems, databases and applications. Due consideration of what information is obtained from the operating system (OS) against databases and/or applications should be used to streamline the volume of audit data collected and to safeguard servers’ storage capacity. Where the organization decides not to record audit trails at any of the system levels – that is, OS, databases or applications – an impact analysis should be carried out to ensure that the costs of missing such logs are quantified against regulatory penalties and the organization’s risk appetite.
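
As a starting point for that impact analysis (a minimal sketch covering only the Windows OS layer; the output file and any retention target are assumptions to adjust to your environment), the current audit policy can be recorded and the Security log's effective retention compared with what policy requires:

  # Sketch: document what is audited today and how much history the Security log actually holds.
  auditpol /get /category:* | Out-File .\current-audit-policy.txt

  $log    = Get-WinEvent -ListLog Security                      # configured maximum size, record count
  $oldest = Get-WinEvent -LogName Security -Oldest -MaxEvents 1
  $newest = Get-WinEvent -LogName Security -MaxEvents 1
  $daysRetained = ($newest.TimeCreated - $oldest.TimeCreated).TotalDays

  "Security log maximum size : {0:N0} MB" -f ($log.MaximumSizeInBytes / 1MB)
  "Events currently retained : {0:N0}"    -f $log.RecordCount
  "Days of history retained  : {0:N1}"    -f $daysRetained

  # If the days of history fall short of the retention target (for example, 90 days per policy),
  # raise the log size, forward events to central log management, or revisit which
  # subcategories are audited at this layer.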

The guidelines
In order to facilitate the systematic review of an organization's audit needs, guidelines should be developed and approved at the appropriate level in accordance with the governance structure of each organization. Having a guideline that outlines the audit policy objectives, risks, threats and data collection points will ensure that adequate audit logs are maintained. This will, in turn, facilitate log monitoring for suspicious events and allow for detailed investigations if the need arises.

The guidelines should not only focus on configuration of audit settings but should also provide guidance on the steps to be followed when procuring log management software to manage event logs. Different log management software is designed to meet the logging needs of different organizations, and as such, the software procured should be in line with the audit objectives and needs of the organization. A one-size-fits-all concept should not be applied.

Configuring audit policies across the organization’s platforms should be a secondary task, implemented through clear guidelines that promote risk assessment of the organization’s audit needs.

Networking Advice from an Introvert

Adam Kohnke

I’m a classic introvert. Early in my IT career, I had no interest in networking with others. I did not see the tangible benefits or understand how networking could be useful to advancing my career interests.

After some time, I realized that I wasn’t connecting with the people inside and outside of my organization to a degree that allowed me to advance socially or professionally. So I challenged myself and made a conscious effort to change my behavior in order to first know my co-workers better and gain useful contacts in the industry as a byproduct. The benefits have been tremendous. Networking has led to recent job promotions, salary increases and further development of the Adam Kohnke brand. 

As an IT auditor, I speak with a variety of audiences as part of my daily routine. In my experience, professional networking sharpens your verbal communication and presentation skills through increased interaction with other individuals in a variety of social settings. The "networking avoidance" mindset is pervasive today among many young professionals and in corporate culture, as the value of professional networking is not something we are usually taught by our parents, in school or even at our jobs. It is often, sadly, a learned behavior one discovers alone.

This blog post seeks to share some of my experience with professional networking and some tricks I’ve learned over the last two years of actively practicing.

  1. Identify a specific target audience and goal. Technically speaking, every person, company and institution is a viable target for your efforts, but your networking efforts should be specifically focused on advancing toward your professional and personal goals. Want to get into IT Security? Begin your networking efforts by contacting IT Security professionals, managers or executives who work in your industry or at a company in which you have interest. Want to learn more about software development? See if a college instructor or student in the field is willing to sit down for a cup of coffee to share notes and ideas on this topic. There’s no limit on where networking can take you or who you can access, so don’t create limits for yourself!
  2. Start with platforms and an outreach message that are comfortable for you. Email and online platforms like LinkedIn are simple and effective methods to start with in order to break the ice and help you gain confidence with your networking approach. These methods provide increased flexibility, more time to experiment and broader coverage versus a random cold call. The downside is that some people do not appreciate the impersonal nature that comes with these electronic approaches, so as you gain comfort in your approach, start adding personal methods into the mix such as lunches, cold calls or the cubicle drive-by. When using electronic methods, keep the message short and to the point. Formally track your results to know what’s working and adjust accordingly.
  3. Start small, but be persistent. Networking takes time, effort and some willingness on the other individuals’ part. Touching again on Point 1 above, it usually helps to identify a small group of people (around 12 individuals or fewer) you have an interest in to see how effective your approaches and methods are. Your response rate will usually vary, but if you go above this number, you may suddenly find you have 20 lunch dates in the near future, and what was supposed to be a fun networking exercise turns into a stressful chore. You may also initially not hear back from anyone. It does happen, but understand that it might not have been the right time for that individual. Revisit that contact later or just move on to the next one. There is never going to be a shortage of potential contacts, so don’t give up!
  4. Offer something in return for the other party’s time. A recent experience with a networking contact of mine revolved around improving my own networking efforts and results. This person is experienced in networking, and I sought the contact’s advice. At the end of our conversation, I asked if I could offer something in return. I ended up issuing a recommendation on LinkedIn that reflected our positive interaction on the subject. It can be something simple: you could offer to share audit techniques or other applicable industry knowledge in return for that individual’s time. What goes around comes around!

An Agile Approach to Internal Auditing

Meredith Yonker

As internal auditors, we’ve seen an uptick in usage of the term “Agile” in reference to how more and more companies are developing software. Agile software development has grown increasingly popular as both software and non-software companies transition from traditional development methodologies, such as the waterfall model, to a value-driven Agile approach. Like any auditable area, this requires internal auditors to understand the key concepts, evaluate the risks and determine how to effectively audit the process based on pre-defined objectives. However, that’s not the purpose of this blog post. What we auditors find even more intriguing is how the values and principles behind Agile software development apply to the field of internal auditing.

The Agile foundation
Agile is an overarching term for various software development methods and tools, such as Scrum and Scaled Agile Framework (SAFe), that share a common value system. Developed in 2001, the Agile Manifesto provides a set of fundamental principles that Agile teams and their leaders embrace to successfully develop software with agility. Companies that have adopted Agile development practices recognize the urgency to adapt quickly to changing technology and deliver enterprise-class software in a short amount of time; otherwise, they run the risk of becoming extinct.

Some of the top benefits of agile development include:

  • Accelerated product delivery
  • Improved project visibility
  • Increased team productivity
  • Better management of changing priorities

Why apply Agile to internal audit?
At The Mako Group, we have found that applying Agile concepts to the internal audit function is not new, but it has never been more crucial than in the current environment. Like the companies we aspire to protect through objective assurance and advice, internal audit must be able to address emerging critical risks and provide relevant insight in a timely fashion. Despite our best intentions, many audit departments still develop a long-term plan that cannot be easily changed and often employ antiquated audit methodologies. If we truly want to add significant organizational value and be a trusted partner with management, internal auditing must evolve, and Agile techniques can help us do that.

Agile internal audit tactics
Just as companies are scaling Agile software development based on the size, capabilities and culture of the organization, the extent of an internal audit function’s agility will vary widely from one group to another. Nonetheless, we have narrowed our focus to three key areas that every internal audit department should consider when becoming more agile:

  • Planning and prioritizing. Agile development teams utilize a backlog as the single authoritative source of work items to be completed, which must be continually prioritized. Items on the backlog are removed if they no longer contribute to the goal of a product or release, whereas items are added to the backlog if at any time a new essential task or feature becomes known. Similarly, the internal audit function should maintain a backlog of areas to be audited that is regularly evaluated and updated based on risk exposure (see the backlog sketch after this section). Instead of committing to a rigid audit plan, this approach allows for timely inclusion of new risks or auditable areas throughout the year. The importance of collaborating with stakeholders during the planning and prioritization process cannot be overstated. Before beginning work on a task or feature in the backlog, explicit and visible acceptance criteria must be defined based on end user requirements, which is called the definition of ready. This is met for an item on the audit backlog when internal audit has the necessary resources available and agrees with the stakeholders up front on the scope, the goal of the project and the value to be delivered.
  • Streamlining the process. Iterations are one of the basic building blocks of Agile development. Also known as a sprint, each iteration is a standard period of time, usually from one to four weeks, during which an Agile team delivers incremental value in the form of usable and tested software. Ultimately, items that move off the backlog must be divided into a series of sprints, which provide a structure and cadence for the work. In the context of internal auditing, the fieldwork associated with an audit should be broken into fixed-length activities that are appropriately sized to promote the motivation of a tight deadline without stressing the resources in place. As the goal is to be quick and iterative, versus confined to a pre-determined plan, eliminating unnecessary resources and efforts is instrumental to an audit team’s successful completion of the work within a sprint. Whenever possible, gathering evidence independently, which also alleviates the burden on stakeholders, is an excellent way for internal auditors to be more efficient. Moreover, examples of waste in the audit process commonly include:
    • Distributing requests for evidence that are too vague.
    • Sending emails back and forth when a phone call or in-person meeting would be a more productive solution.
    • Exhaustively explaining every step taken without considering that concise documentation could achieve the same effect.
  • Soliciting continuous feedback. One of the most commonly practiced Agile techniques is a daily stand-up meeting, normally lasting no longer than 15 minutes, in which an Agile development team discusses each member’s contributions and any obstacles. To be truly effective, internal audit team members must regularly check in with each other and not hesitate to raise questions or issues as soon as they come up. Rather than waiting until the fieldwork has been completed to start internal reviews, quality assurance should be built into the daily audit activities.

Furthermore, internal auditors must not wait until the end of an audit to provide results. Early and frequent communication with stakeholders means that the final report or presentation should simply reflect a visual summary of the insights already discussed. We should not only identify opportunities to enhance an organization’s operations but also continuously improve our own audit processes. A crucial role on an Agile team to help foster an environment of high performance and relentless improvement is the scrum master. Acting as the coach of an internal audit team, a scrum master would ensure that the agreed Agile process is followed and encourage a good relationship among team members as well as with others outside the team.
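
A minimal sketch of such a backlog (the areas, scores and field names below are made up for illustration) is simply a list of auditable areas that is re-scored and re-sorted as risk information changes, with the top ready items pulled into the next sprint:

  # Hypothetical audit backlog: re-prioritized continuously rather than fixed for the year.
  $backlog = @(
      [pscustomobject]@{ Area = "Cloud access management";   RiskScore = 9; DefinitionOfReady = $true  },
      [pscustomobject]@{ Area = "Vendor SOC report review";  RiskScore = 6; DefinitionOfReady = $true  },
      [pscustomobject]@{ Area = "Legacy ERP change control"; RiskScore = 8; DefinitionOfReady = $false }
  )

  # New risks can be added at any time; items that no longer add value are removed.
  $backlog += [pscustomobject]@{ Area = "New data-transfer process (GDPR)"; RiskScore = 9; DefinitionOfReady = $false }

  # Only items that meet the definition of ready, taken in risk order, feed the next sprint.
  $backlog | Where-Object DefinitionOfReady |
      Sort-Object RiskScore -Descending |
      Select-Object -First 2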

Audit Consideration for Microsoft Exchange

Adam Kohnke

Microsoft Exchange is one of the primary solutions used to provide email services for medium and large organizations. Exchange directly serves as an information transport mechanism and indirectly as a storage medium for organizational data in the form of attachments and email message content. This blog post covers a high-level subset of audit considerations for Exchange 2010 and newer environments to help your organization assess whether proper oversight and controls exist to limit the likelihood of unauthorized information disclosure, disposal or modification.

The Security Access Groups. Exchange privileged access is typically associated solely with the Exchange Administrators group. Starting in Exchange 2010, Microsoft developed an internal Role Based Access Control (RBAC) scheme that provides additional AD security groups with varying degrees of elevated permissions and rights. For example, members of the Server Management group can modify certain properties of any Exchange server in the environment. Members of the Organization Management group are essentially Exchange admins, just without the rights to perform mailbox searches. A total of 11 built-in Exchange role-based access groups should be considered for review as it relates to privileged access. The Exchange Administrators group is effectively the sum of all 11 role-based access groups.
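
As a starting point for such a review (a sketch using built-in Exchange Management Shell cmdlets; extend it to any custom role groups in your environment), the membership of each role group can be enumerated and exported for an access review:

  # List the role groups, then enumerate their members so the access review
  # covers more than just the most obvious administrator group.
  Get-RoleGroup | Select-Object Name, Description

  Get-RoleGroup | ForEach-Object {
      $group = $_.Name
      Get-RoleGroupMember -Identity $group |
          Select-Object @{n = "RoleGroup"; e = { $group }}, Name, RecipientType
  } | Export-Csv .\exchange-role-group-members.csv -NoTypeInformation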

Monitoring Group Membership. Exchange comes with 12 privileged security groups (Exchange Administrators and the 11 built-in role groups). Your ability to promptly detect and respond to membership changes in these groups can be useful in a variety of ways. First, it may allow you to proactively identify recon or insider threat-based attacks if processes are in place to monitor and alert when additions to sensitive groups occur. A manual follow-up on the alert may indicate that an account addition was unauthorized or associated with an external threat. Second, removals from sensitive Exchange groups may be an indicator of a threat agent attempting to lock you out of your systems or impede your ability to administer the environment prior to launching, or following, a successful cyberattack.
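
One lightweight way to do this (a sketch; the baseline file location is an assumption, and a SIEM rule on Windows group-change events such as IDs 4728, 4732 and 4756 is the more robust option) is to snapshot role-group membership on a schedule and compare it with the last approved baseline:

  # Sketch: compare current role-group membership against an approved baseline
  # and surface any additions ("=>") or removals ("<=") for follow-up.
  $baselineFile = ".\role-group-baseline.csv"     # assumed location of the approved baseline

  $current = Get-RoleGroup | ForEach-Object {
      $group = $_.Name
      Get-RoleGroupMember -Identity $group |
          Select-Object @{n = "RoleGroup"; e = { $group }}, @{n = "Member"; e = { $_.Name }}
  }

  if (Test-Path $baselineFile) {
      $baseline = Import-Csv $baselineFile
      Compare-Object $baseline $current -Property RoleGroup, Member |
          Format-Table RoleGroup, Member, SideIndicator
  }

  # Re-baseline only after the differences have been reviewed and approved.
  $current | Export-Csv $baselineFile -NoTypeInformation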

Auditing Administrator Actions. Exchange provides built-in administrator logging functions, allowing commands or actions performed by privileged users to be captured for review. The logging can be redirected to SIEMs or other repositories for independent and secure analysis. The potential need for this function lies in some of the rights available to privileged Exchange users, such as the ‘SendAs’ right, which allows an email to be sent by ‘User A’ while appearing to have come from ‘User B.’ Oh, what fun you could have with ‘SendAs’ rights! Admin logging can also capture whether hard and soft deletes were issued against another user’s mailbox (think the C-suite) or whether deleted items have been recovered. Check administrator logging status in your environment by issuing the Get-AdminAuditLogConfig | Select *audit* command from the Exchange Management Shell.
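
If admin audit logging is enabled, the log itself can be queried from the same shell. For example (a sketch; the 30-day window and the cmdlets filtered on are assumptions to tailor), recent use of sensitive cmdlets such as Add-MailboxPermission or New-MailboxExportRequest can be pulled for review:

  # Confirm admin audit logging is on, then review recent use of sensitive cmdlets.
  Get-AdminAuditLogConfig | Select-Object AdminAuditLogEnabled, AdminAuditLogAgeLimit

  Search-AdminAuditLog -Cmdlets Add-MailboxPermission, New-MailboxExportRequest `
      -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) |
      Select-Object RunDate, Caller, CmdletName, ObjectModified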

Auditing Mailbox Use. Exchange also provides a mailbox auditing capability, giving a more granular view into a specific user’s mailbox. Using mailbox auditing in conjunction with administrator logging is typically sufficient to provide adequate audit coverage, because Exchange gives administrators the option to set an audit bypass on particular mailboxes, which may allow certain admin actions to go unnoticed for extended periods of time. Mailbox auditing serves as a primary mechanism to identify mailbox abuse perpetrated by Exchange privileged users.
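
A sketch of how mailbox auditing might be enabled and reviewed from the Exchange Management Shell (the scope, the 90-day age limit and the "Executive Mailbox" identity are assumptions; align them with your own policy), including a check for audit-bypass accounts that would otherwise blind the control:

  # Enable mailbox auditing for all user mailboxes (90-day retention assumed).
  Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox |
      Set-Mailbox -AuditEnabled $true -AuditLogAgeLimit "90.00:00:00"

  # Accounts with audit bypass set will not generate mailbox audit entries; review them.
  Get-MailboxAuditBypassAssociation |
      Where-Object { $_.AuditBypassEnabled } |
      Select-Object Name, AuditBypassEnabled

  # Example review: non-owner access to a sensitive mailbox over the last month.
  Search-MailboxAuditLog -Identity "Executive Mailbox" -LogonTypes Admin, Delegate `
      -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) -ShowDetails |
      Select-Object LastAccessed, LogonUserDisplayName, Operation, FolderPathName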

eDiscovery and Data Holds. Exchange allows administrators to place litigation holds on data contained in its repository to prevent deletion and to perform item-specific searches across multiple mailboxes. Monitoring when these features are enabled or disabled may allow organizations to identify when users with privileged access are attempting to electronically dumpster dive, perform recon by recovering deleted emails, or cover up unsanctioned actions by disabling data or litigation holds placed on corporate data. Controlling access to and monitoring eDiscovery should be a key control consideration.
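
As a sketch of what that monitoring might cover (object names here are illustrative), both the hold status of mailboxes and who can run mailbox searches can be reviewed from the shell:

  # Which mailboxes currently have litigation hold enabled, who set it and when.
  Get-Mailbox -ResultSize Unlimited |
      Where-Object { $_.LitigationHoldEnabled } |
      Select-Object DisplayName, LitigationHoldEnabled, LitigationHoldDate, LitigationHoldOwner

  # Who effectively holds the "Mailbox Search" role (eDiscovery) in the organization.
  Get-ManagementRoleAssignment -Role "Mailbox Search" -GetEffectiveUsers |
      Select-Object EffectiveUserName, RoleAssigneeName, Role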
