I recently began taking my first crack at auditing an Amazon cloud platform that comprises over a dozen managed services. While I was excited to add this new wrinkle to my skill set, I had no idea where to start identifying the key risks applicable to each service or how to approach the engagement. Searching online eventually led me to the AWS training and certification website. My intuition suggested that Amazon was unlikely to help me audit its services, and that even if it did, there probably would not be much free information available that I could leverage to gain a sufficient understanding of the service architecture and operation. Well, I was dead wrong!
This blog post covers my experience with the free Cloud Practitioner Essentials course offered by Amazon and some of the key takeaways I gained from various sections of the training.
Topics covered and use for auditors
The course takes about eight hours to complete, divided among five major sections (Cloud Concepts, Core Services, Architecture, Security, and Pricing and Support). If you can forgive Amazon’s “pointing two thumbs at itself” advertising tone, you can start picking up key risks and audit focus areas as you march on. My key takeaway from the Core Services overview portion of the course was about Trusted Advisor, a management tool auditors can use to get a very quick glance at how well their organization is utilizing the collection of services. Trusted Advisor highlights whether cost optimization of services is being achieved while providing recommendations on how to improve service usage. It also reports on whether performance opportunities exist, whether major security flaws are present (such as not using multifactor authentication on accounts) and the degree of fault tolerance that exists across your compute instances. Trusted Advisor is not the be-all, end-all, as it uses a limited set of checks, but it provides a quick health check of the environment. A list of Identity and Access Management (IAM) best practices was also provided, along with details on how to validate them.
Security, shared responsibility and suggested best practices
Amazon does a good job of pounding the shared responsibility security concept into your head and touches on it throughout the training. Amazon is responsible for security of the cloud (physical security, hardware/firmware patching, disaster recovery, etc.), and the customer is responsible for security in the cloud (controlling access to data and services, firewall rules, etc.). My key takeaway from this portion of the training centered on best practices for managing the root user account issued to you upon establishing web services with Amazon. The root account is the superuser account with complete access to and control over every managed service in your portfolio. Because you cannot restrict the abilities of the root account, Amazon recommends deleting the access keys associated with the root account, establishing an IAM administrative user account, granting that account the necessary admin permissions, and using that account to manage services and security.
Detailed and helpful audit resources
The pricing and support section of the training provided very useful metrics that financial auditors and purchasing personnel should be interested in to help them determine the cost and efficiency of managed services. This portion of the course covers fundamental cost drivers (compute, storage and data transfer out) and very specific cost considerations for each fundamental cost driver, such as clock time per instance, level of monitoring, etc.
My key takeaway from this portion of the course was the overview of the support options – more specifically the audit white papers that are maintained and made available at no cost. There are very specific security audit guides, best practices for security, operational checklists and other information that will allow an auditor to build an accurate and useful engagement program.
This course led me to the Security Fundamentals course, which provides enhanced audit focus on several topics of interest to IT auditors, including access management, logging, encryption and network security.
In conclusion, I am surprised and extremely impressed with the amount and depth of free resources Amazon makes available to the auditing community regarding its services. I wish other providers would take similar measures to assist the information security community with understanding what is important and how to gain assurance on its secure operations.
Editor’s note: View ISACA’s audit/assurance programs here.
The US Federal Communications Commission (FCC) recently repealed the net neutrality guidelines that it implemented less than three years ago. There has been much discussion, speculation and concern about how that move will impact the Internet, small business and consumers. Many people have suggested that one effect of the repeal will be that video streaming and other cloud hosted, web-delivered media will start to cost much more for the consumer.
When the FCC enacted its 2015 regulations, it became unlawful for broadband providers to slow down or block certain web traffic. The 2015 rules did not cover enterprise web access, which is often custom. They did, however, safeguard the flow of data to small businesses.
FCC chair Ajit Pai and other lawmakers take the position that the policies and practices of net neutrality are unnecessary rules which make it less likely that people will invest in broadband networks, placing an unfair strain on internet service providers (ISPs). That perspective does not seem to be aligned with that of the public, according to a poll from the University of Maryland showing that 83% of voters favored keeping the net neutrality rules in place.
What is net neutrality, exactly?
The basic notion behind net neutrality, according to a report by Don Sheppard in IT World Canada, is that the government should ensure that all bits of data and all information providers are treated the same way.
Net neutrality makes it illegal for ISPs to throttle or block traffic, offer paid prioritization or engage in similar practices (see below).
Sheppard noted that there are two basic technical principles related to internet standards that are part of the basis for net neutrality:
- End-to-end principle – Application-specific functions should take place at the endpoints of a general-purpose network, not at intermediate nodes.
- Best-efforts delivery – The network offers no performance guarantees, only a best effort at equal, nondiscriminatory packet delivery.
IoT: harder for startups to compete?
Growth of the Internet of Things (IoT) is closely connected to the expansion of cloud computing, since IoT typically uses the cloud as its back end. In terms of impact on the IoT, Nick Brown noted in IoT For All that the repeal of net neutrality will create an uneven playing field in which it becomes more difficult for smaller organizations to compete, while larger firms will be able to form tighter relationships with ISPs.
Latency is a key issue in the removal of net neutrality, because latency increases as sites are throttled (deliberately slowed). Throttling could occur between one device and another because ISPs may want some devices (perhaps ones they build themselves) to perform better than others.
Some people think the impact of the net neutrality repeal on the IoT will be relatively minor. However, many thought leaders think there will be a significant effect since IoT devices rely so heavily on real-time analysis.
Entering a pay-to-play era
Throttling, or slowing throughput, could occur with video streaming services and other sites. Individual cloud services could be throttled. Enterprises could also have difficulty with apps they host in their own data centers, since those apps require a fast internet connection to function well.
There will be a pay-to-play scenario for web traffic instead of just using bandwidth to set prices, according to Forrester infrastructure analyst Sophia Vargas.
The repeal of net neutrality has changed pricing models and created competition between wired and wireless services, said Vargas. Pricing is per bandwidth for wired, landline services, while it is per data for wireless services. Wireless services will have the most difficulty because wired services are controlled by a smaller number of ISPs.
There will be more negotiations and volatility in the wireless than in the wired market, noted Vargas. Competition is occurring “in the ability for enterprises to essentially own or get more dedicated [wired] circuits for themselves to guarantee the quality of service on the backend,” she added.
Does net neutrality really matter?
The extent to which people are committing themselves to one side or the other gives a sense of how critical net neutrality is from a political, commercial and technical perspective. A consumer should be aware of the potential for companies to mistreat them without these protections in place (which is not to say those abuses will occur).
Ways that ISPs could act against the precepts of net neutrality include:
- Throttling – Some services or sites could be served at slower (or faster) speeds than others.
- Blocking – Getting to the services or sites of competitors to the ISP could become impossible because those sites are blocked.
- Paid prioritization – Certain websites, such as social media powerhouses, could pay to get better performance (in reliability and speed) than is granted to competitors that may not have the same capital to influence the ISP.
- Cross-subsidization – This process occurs when a provider offers discounted or free access to additional services.
- Redirection – Web traffic is sent from a user's intended site to a competitor’s site.
Rethinking mobile apps
Another aspect of technology that will need to be rethought in the post-repeal world is improving efficiency by developing less resource-intensive mobile apps that are delivered through more geographically distributed infrastructure. Local caching could also help, and delivery of apps that serve video and images should potentially be restructured.
You can already look at file size to create better balance in the way you deliver video and images to mobile users. However, the rendering, the amount preloaded into the stream (to avoid additional pings) and other aspects are optimized with net neutrality as a given.
Providers of content delivery networks (CDNs) will need to re-strategize the methods they use to optimize enterprise traffic.
Cost has been relatively controlled in the past, according to Vargas. There is an arena of performance management and wide area network (WAN) optimization software that was created to manage speed and reliability for data centers and mobile. Those applications will no longer work correctly because they were engineered with traffic equality as a defining principle. Hence, providers will have to adapt to the new paradigm.
One of the biggest misconceptions regarding the cloud is that you can rely on the cloud provider service to protect your business, your data and everything else your firm holds dear.
Take a minute to think about your own home security system. Do you just lock the doors and head off to work, fully confident that your valuables will still be there when you get back? Not likely. Many of us have at least a simple alarm system on doors and windows. More and more people are adopting the latest trends in home security: motion sensors, 24-hour video cameras, remote door answering, etc.
Why does securing your cloud matter? Three enormous reasons:
- Your cloud provider is only managing part of your security.
- Cloud security lowers the risk of data breaches.
- The minimum level of security compliance should never be enough.
Your security vs. cloud security
Let’s talk about your security versus the cloud service provider’s security. The provider includes specific language in any contract it signs with you concerning what it is and isn’t responsible for if there is a security breach. In its 2016 “Cloud Adoption & Risk Report,” Skyhigh Networks reported that the average user in an organization employed 36 different cloud services at work. That’s 36 potential breach points into your cloud and 36 ways for information to leak out. By introducing all of the apps your business needs into your cloud environment, you take on the responsibility of ensuring that they do only what they need to do when analyzing and manipulating the data stored in your cloud.
It is essential that you manage all of your cloud-based applications and treat each one as a security risk until the day you can scratch it off that list. The old days of buying a third-party app and simply plugging it into your network are long gone. Your best way forward is a Security-as-a-Service (SECaaS) solution. Just like your infrastructure, software and your share of the cloud itself, SECaaS is a scalable solution that can handle your growth but also scale down if your business shrinks. Even an in-person, onsite IT expert is not available 24 hours a day, 7 days a week, but a SECaaS is. The service can deploy solutions instantly when problems or suspicious activities arise, unlike in a traditional setting where everyone waits for the IT professional to respond to a call for help.
The high price of data breaches
As for breaches, a 2016 study estimated the cost of a data breach for a company at US $4 million. If your company has an extra $4 million lying around, by all means don’t fret about your cloud security. That figure might seem high at first glance, but there’s far more at work here than merely a loss of data or intellectual property. When you suffer a public data breach, word travels fast. Your best employees will be more receptive to offers from competitors. Your recruitment will suffer, as those entering the workforce and those seeking to switch employers will take a much harder look at what sort of company gets breached and what kind of company they want to work for. And last but not least is the impact a data breach will have on your company’s public perception. The public has an incredibly long memory when it comes to embarrassing incidents at public companies. Don’t believe it? Fast-food giant Jack in the Box had a scare with mislabeled meat in 1981, and 37 years later, it’s still one of the top Google results for the restaurant chain.
Nobody wants the minimum
You didn’t get into business to do the bare minimum when it comes to protecting your assets and your customers’ information. No salesman has ever told a customer that he’d do the absolute least amount of work he could to get the customer’s business. The same excellence you strive for in taking command of your market and maximizing your profits should be applied to keeping your cloud secure.
To secure your cloud, consider adding measures such as multifactor authentication, so that even if an employee’s login name and password are stolen or compromised, the party that took them still cannot access your cloud without clearing an additional layer of security. Simple steps like this can be the difference between a secure cloud system and one just waiting to be picked apart by hackers.
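To make that additional layer concrete, here is a minimal sketch of how the time-based one-time passwords (TOTP, RFC 6238) behind most authenticator apps are generated. This is purely illustrative; a production system should use a vetted authentication library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59.
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds and depends on a shared secret the attacker does not have, a stolen password alone is not enough to log in.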
While no one doubts the impact that cloud computing has on our present and future digital needs, it still has basic flaws that are cause for alarm, notably concerns over data privacy and its ability to handle large-scale, constant computations.
The Internet of Things (IoT) continues growing at an exponential rate. Its market is estimated to reach $457 billion by 2020, a jump of 28.5% from 2016. But concerns still loom about its shortcomings when synced up to the cloud, a problem that has tied edge computing to the IoT.
If your business depends on IoT devices being able to parse data seamlessly in real-time to provide instant analysis for your processes and people, then edge computing is not some hazy future vision, but your solution for today’s problems.
Edge computing does what the cloud can’t for IoT: it reduces latency and enables faster processing for IoT devices attempting to operate in real time. Prototype self-driving cars, or hospital-room sensors tasked with making decisions as vital signs ebb and flow, risk catastrophic events if they are unable to process data without delay.
Here are the three biggest reasons your company should be employing edge computing for IoT right now.
Reduced latency. Sometime this year, IoT devices will surpass cellphones in number of connected devices. That’s pretty staggering for a technology most people had no idea existed five years ago. Locating the edge computing device far closer to the IoT object can cut latency dramatically. Edge computing will also determine each device’s processing needs and adjust accordingly; the entirety of a company’s cloud space does not have to come online for the computations of one small IoT unit.
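As a back-of-the-envelope illustration of why proximity matters, the sketch below estimates round-trip propagation delay at two distances. The distances and the signal speed in fiber (roughly two-thirds the speed of light) are illustrative assumptions, and real round trips add queuing, routing and processing delays on top of propagation.

```python
# Propagation delay is simply distance divided by signal speed.
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 the speed of light, an illustrative figure

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

cloud_rtt = round_trip_ms(1500)  # hypothetical distant cloud region
edge_rtt = round_trip_ms(5)      # hypothetical nearby edge node

print(f"cloud: {cloud_rtt:.2f} ms, edge: {edge_rtt:.3f} ms")  # → cloud: 15.00 ms, edge: 0.050 ms
```

Even before congestion and processing are counted, moving the compute from a distant region to a nearby edge node cuts the physics-imposed floor on latency by orders of magnitude.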
Upgraded network connections. Edge computing limits the impact of cloud outages on individual devices by reducing their interactions with the cloud. Only essential functions are run through the cloud, which in turn eases the burden on that environment to perform its own functions. Apps that exist solely in the cloud will have more processing power without having to compete for bandwidth against IoT machines that can operate successfully on the network’s edge.
More cohesive privacy. With new laws like the EU’s General Data Protection Regulation (GDPR) coming online, data privacy has never been a bigger topic in the digital realm. In its earlier forms, security was a low-grade concern for IoT devices like digital thermostats. But as a legion of microphones, cameras and other personal input devices join the IoT, the threat of data loss or theft becomes more real and more harmful. Edge computing can take considerable strain off securing data by performing a number of the required computational steps on the device itself. The network can then send the data along to the cloud after it has been processed and encrypted locally, speeding its handling in the cloud and lowering the risk of theft.
Author’s note: Established in 1994, Atlantic.Net is a hosting solutions provider, with data centers in New York City, San Francisco, Dallas, Toronto, London and Orlando. Marty Puranik is the founder, president and CEO.
ISACA’s newly released report, How Enterprises Are Calculating Cloud ROI, is a landmark piece of research that, in my opinion, validates the notion that we have reached (or are at least rapidly approaching) that tipping-point where organizations realize that moving their IT infrastructures to the cloud is an inevitable, foregone conclusion. The white paper documents the growing trend for organizations to forgo financial ROI analyses as a way to justify investment in cloud computing, instead resorting to intangible returns, such as better application performance, enhanced business agility and improved customer- and employee-experience (Cx/Ex).
But could there also be a more subjective reason fewer companies are performing ROI analyses? There definitely is a growing perception that forward-thinking companies are moving away from internally managed data centers and infrastructure. In many of my interactions with IT professionals from major, Fortune 500 enterprises, I have heard comments like, “To stay competitive, we’re moving all applications to the cloud; anything we can’t move will die in place … ” and, “Our board of directors is asking why we are not taking more advantage of the cloud.”
The age of digital transformation is demanding our attention and driving us to a foregone conclusion: we cannot continue to do it all ourselves. As IT professionals, we are asked to reduce costs, increase productivity, stay a few steps ahead of cyber criminals, support the needs of growing mobile workforces, accommodate hybrid networks, improve Cx/Ex, all the while improving network and application performance. Enterprises, and even governments, are also under pressure to appear competitively on the leading edge by outwardly embracing digital transformation. All of this cannot be accomplished by building on top of traditional IT infrastructure and management models.
The digital age is moving at such a fast pace that there may be a general sense there isn’t time for formal ROI rigor, or that the concept of due care is taking hold, such that it is now considered irregular and imprudent not to migrate IT functions to the cloud. There was a time when office buildings had their own telephone switchboard operators, and textile mills and factories generated their own electricity. When the automated PBX and centrally distributed AC power emerged, it’s doubtful many organizations did an ROI analysis once it became obvious the old way was the wrong way.
ISACA’s new guidance on cloud ROI reinforces many of these notions. The habit of conducting formal cloud ROI analyses may be coming to an end as traditional IT gives way to the cloud.
As you know, change management is critical to the long-term success of every organization. This is especially true when it comes to IT, where change happens at an astonishing pace. But is your organization where it needs to be?
Guidance for Your Change Management Strategy
There is something equally exhilarating and frightening about change. It is a necessary factor in moving forward and growing as a business, but it’s also unfamiliar and intimidating. Unless you have a strategy in place for managing change – particularly on the IT front – it’s quite likely that you’ll focus more on fear and anxiety than hope and excitement.
That being said, here are some tips to help you approach IT change management from a strategic perspective.
Change management is all about planning ahead and being proactive. Once an issue already has occurred, or your organization finds itself in the midst of a major shift, it is too late. Doing damage control or trying to put out fires will take valuable energy away from other important tasks. Start early and always anticipate what will happen next.
Choose the right software
You don’t have to take on change management all by yourself. Automating some of the process with the right tools can make all the difference in the world. For example, change management software, such as a help desk platform, can simplify the process by providing highly customizable solutions and automated workflows to manage change requests and approvals.
It’s also helpful to have some sort of communication tool integrated within your change management software that allows you to reach all key stakeholders whenever and wherever they are. The sooner people are involved in the process, the faster you can get things moving on the right track.
Choosing the right tools becomes even more important the larger the organization is. This is something the California State University (CSU) system has discovered firsthand when it comes to making key changes to its IT system.
Any IT system change that occurs on the main campus has to also go through each of the 23 satellite campuses and the thousands of employees, faculty and students at these locations. So, whereas a small change might not have a major impact at the main campus, it can have drastic effects when compounded over two dozen campuses. In order to simplify the process and make it easier to manage, CSU uses an automated change management system from Cisco that allows upgrades to occur automatically across the entire system. The results at CSU have been far better than if change management were handled manually.
Focus on the outcome
While change management is all about taking the necessary steps to move from Point A to Point B in the most seamless and efficient way possible, the focus always has to be on the outcome. When it’s all said and done, change management exists to ensure your IT department is set up for future success.
One of the biggest mistakes organizations make is assuming that change management is all about deploying the right technologies and setting up the appropriate processes. While these are certainly important components of change management, it’s actually your employees who have to execute.
“If these individuals are unsuccessful in their personal transitions, if they don’t embrace and learn a new way of working, the initiative will fail,” explains Prosci, a leading change management firm. “If employees embrace and adopt changes required by the initiative, it will deliver the expected results.”
You have to learn to prioritize engagement of all key stakeholders; otherwise, you’ll find it challenging to make any progress. Start preparing them early and often.
At the end of the day, change never happens as anticipated. You may have a perfect plan for what you think will happen – and even have complete buy-in from all individuals and departments involved – but something will inevitably go awry. A willingness to adjust will serve you well.
Overlook IT change management at your own peril
The word “change” probably evokes a range of emotions. Your mind may jump to past experiences of change that were negative or unwelcome. Or, perhaps you have had good experiences with change and get excited at the thought of doing something new. But regardless of your personal history with change, you need to prepare your enterprise for the future by developing a specific change management strategy. Embracing technology-driven change management is vital if you want to be successful in the modern business world.
One of the biggest technology advancements in recent years is the expansion of the cloud, freeing up space on users’ computers and mobile devices while keeping their documents, videos and pictures conveniently stored in one place.
Like a commercial security system, the cloud can be used to ensure the safety of documents and other private information. Companies that use the cloud for storage and security take advantage of its unprecedented scalability, quick deployment and the savings that come with it. There also are risks to using the cloud, including a bigger chance of unauthorized access to private information, legal exposure and a lack of control.
With so many people making the switch to the cloud, there are new opportunities for people both in business and in private employment. The cloud can also cause confusion about who is actually in control. A business owner has control over what happens and how that business is run, but when it comes down to it, the vendor holds all the cards. For example, the vendor can change pricing at any moment, and with clients depending on the services provided, companies are forced to pay whatever price is asked to ensure those services continue.
Vendors are having difficulty adapting to the changes caused by this rapid shift to the cloud. As they scramble to keep up, vendors can lose control of the situation. In a survey by ScienceLogic, less than one-third of IT professionals reported having the control they need to keep their business moving forward efficiently.
Cloud use is growing faster than the organizations that control it can manage, leading to security exposures and unnecessary financial costs. As concerning as that may be, the cloud also leads to new business techniques and opportunities that enable innovation. Businesses worried about the future want to know the best ways to help the company succeed. Sometimes this leads to moments of uncertainty and confusion. These moments can benefit a company by helping it and its employees learn to succeed in different situations and environments.
Believe it or not, a degree of chaos can be effective. Companies that risk confusion and a lack of control often jump ahead in their industry. Businesses such as Amazon Web Services and Microsoft, with Azure, have offered the cloud as a service to their customers, and each estimated and received an increase in revenue from its move to the cloud.
In the business world, it is important to stay up to date on technology, but even more important to be aware of management tactics. In this case, the cloud is both a technological advancement and a useful management tool. For a company to truly succeed, it needs a culture that thrives on new ideas and new technology. Organizations that stick to old, outdated ways often become overwhelmed trying to regain control in a fast-adapting technological environment.
Technology clears the path for employees and companies to become part of an innovative business landscape. There are always risks when it comes to new technology, but taking the chance to learn the new developments can help a business take the lead in their field. The cloud provides a company with the chance to use the extra space as an opportunity to not only help the business succeed but also to help its employees discover new learning and business techniques.
Cloud-based computing and storage is increasingly popular—to the extent that some companies are cutting hard drive space to encourage users to shift toward the cloud. And while the cloud is convenient, allowing your files to travel easily and across devices, that kind of convenience isn’t exactly what you want when it comes to protecting medical files. Is your cloud use secure enough to meet Health Insurance Portability and Accountability Act (HIPAA) standards? Here are some factors to consider.
A Quick Overview
There are a lot of cloud systems available these days, but the first thing you should do when choosing one is compare baseline HIPAA compatibility. Amazon S3, Dropbox and iCloud are not HIPAA compliant out of the box. Most other major systems, including Box, Egnyte, Google Apps and CrashPlan Pro, are HIPAA compliant. Ruling out the noncompliant options narrows your choice of cloud systems, allowing you to focus on the details of compliant plans.
EHR or HIPAA
In addition to cloud computing, many physicians are shifting to digital recordkeeping, using what are known as electronic health records (EHR) systems. These systems are great for centralizing patient data and encouraging collaboration across different medical practices that share the same EHR vendor. However, EHR requirements and HIPAA privacy standards aren’t exactly the same.
The first rule of managing EHR in accordance with HIPAA standards is that you should never trust an EHR vendor that says you don’t need to worry about their HIPAA compliance. Although your specific files may be HIPAA compliant, other practices used by external vendors may not be; for instance, their cloud storage security may be lacking. Additionally, although EHR systems have all the features needed to be fully HIPAA compliant, you’ll need to check to make sure they are properly configured. If necessary safeguards are turned off, your patients’ data may be at risk.
Don’t Play Hide and Seek
Rather than establishing thorough HIPAA-compliant practices, some organizations still think that what is known as “security through obscurity” is a valid system providing the necessary protections. Realistically, though, this is possibly the worst of all security practices. This kind of security focuses on hiding your computer network but tends to disregard proper antivirus software.
Additionally, such practices tend to reveal other lacking security practices within the organization, such as indiscriminate file sharing (between virus-infected computers, no less). Simply hiding your network doesn’t count as securing your files – a skilled hacker can easily access even an invisible network.
BAAs Are Not Enough
Google has a strong reputation in the cloud-computing world, including among health organizations with high security standards. As a result, medical practices using Google Apps often feel confident that their files are safe, as long as they’ve signed a Business Associate Agreement (BAA).
A BAA might keep your information safe on an internal level, but it won’t help secure patient files when they are transferred to other digital environments. Instead, when transferring files, end-to-end encryption is the safest bet. Encrypting data before it leaves your systems keeps it HIPAA compliant, even when it leaves the Google cloud.
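The principle can be sketched in a few lines. This is a minimal illustration under stated assumptions, not production guidance: it assumes the third-party `cryptography` package is installed and uses its Fernet recipe to encrypt a record locally, so only ciphertext ever reaches the cloud provider.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt patient data locally before it is sent to any cloud storage."""
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt data after retrieval; the provider never sees the key."""
    return Fernet(key).decrypt(ciphertext)

# The key is generated and stored on your side, never with the provider.
key = Fernet.generate_key()
record = b"patient: Jane Doe, DOB 1970-01-01"  # illustrative data only
blob = encrypt_for_upload(record, key)          # this is what gets uploaded
assert blob != record
assert decrypt_after_download(blob, key) == record
```

The design point is simply that the cloud provider stores opaque ciphertext while the key stays inside your own environment, which is what makes the transfer safe regardless of the provider’s own controls.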
Consider Adoption Side Effects
It’s great to choose a new HIPAA-compliant cloud system for your business, but in our pursuit of better data management systems, we often forget to consider the human elements of adopting new systems. Before choosing a new system, then, it’s important to ask whether your employees will be able to effectively use the new system, and whether there are other options they may find more convenient.
This is a common problem for companies choosing between Office 365 and Google Apps for their cloud computing activity. Both Microsoft and Google will sign BAAs that offer HIPAA compliance, but the two programs have different strengths. This is where considering use and convenience is important. If you work a lot with documents, you might think that Office 365 is the way to go—most of us came of age writing everything in Word, so why not? The main reason not to, it seems, is that Google Docs’ collaboration features are helpful and the platform is more convenient. The reverse seems to hold for spreadsheets.
If you can’t get your team on board with a new computing system, no amount of security regulation in the world will help you. Be sure to tell your staff clearly about the organizations with which you have BAAs, the legal risks of using other systems, and their responsibility to patient privacy as health field employees.
Cloud adoption is trending—and it is an inevitable choice for any enterprise that wants to stay relevant in today’s interconnected world.
The security of storing and processing critical data outside of the enterprise’s control is a central factor in any analysis of cloud adoption.
So whether your organization employs a cloud-first strategy or is still sitting on the sidelines of the cloud game, there are three key steps to understanding what risks the cloud poses to your data.
- Assess your current cloud usage. What cloud services are your users already using to do their jobs? Security leaders should sponsor a project to inspect all network traffic using a web proxy server or cloud access security broker (CASB) to fully identify your enterprise’s app consumption. The next step is differentiating between enterprise-sanctioned apps and rogue shadow IT apps. The prevalence of shadow IT is either unknown or underestimated by the IT departments at most enterprises. The mounting risks from decentralized and uncontrolled cloud service adoption across the gamut of enterprise applications have left CIOs wondering how best to assess the extent of shadow IT services that have migrated to the cloud without adequate control measures or oversight from IT. While these shadow IT systems may have served as a quick win for the business when implemented, the legacy impact of these cloud solutions is redundancy and an increased attack surface throughout the enterprise. As surveillance and data leakage concerns continue to haunt consumers and businesses alike, security due diligence of cloud solutions is paramount.
- Adjust your strategy to reduce cloud risk. There may be significant cost and efficiency gains possible by moving select services to the cloud. Risk reduction measures should be evaluated concurrently to securely scale your cloud adoption. Consider cloud identity management solutions for single sign-on to enable centralized access controls, including multifactor authentication options. Further, automated user provisioning will inject security into your application portfolio management. Another recommendation to security leaders is to leverage a layer 7 next-gen firewall for web traffic classification and control. This visibility will allow you to block risky, nonbusiness apps, such as peer-to-peer sharing, or restrict quasi-business apps, such as file sharing services, to only privileged users/groups with a demonstrated need.
- Plan your future cloud model. Whether your business users want to consume Software as a Service (SaaS) solutions or your IT infrastructure teams see value in Infrastructure as a Service (IaaS) offerings, there are many ways to mitigate your risks while satisfying both sides. Advanced security analytics, data context and application auditing made available by CASBs can enable deep integration into many foundational enterprise apps (Office 365, Google Apps, AWS, Azure). It is also imperative to formalize your application risk assessment when choosing between cloud-based SaaS and increasingly available on-premise SaaS solutions for those critical services that your risk managers cannot bless for the cloud. Some niche cloud service providers (e.g., GitHub, JIRA) also offer on-premise options to customers, and new Docker container technologies (e.g., Replicated) are now allowing vendors to offer the same SaaS experience delivered on-premise, in an effort to keep a better handle on enterprise data and security. In the ultimate decision of cloud adoption, your future cloud model may well be sitting behind your own firewall.
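The first step above (assessing current cloud usage) can be approximated even before deploying a commercial CASB by triaging web proxy logs. Here is a minimal sketch; the log format, the sanctioned-app list and the domain-reduction rule are all illustrative assumptions, not a standard:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical list of enterprise-sanctioned cloud apps maintained by IT.
SANCTIONED = {"office365.com", "salesforce.com"}

def shadow_it_report(log_lines):
    """Count requests to cloud domains that are not on the sanctioned list."""
    hits = Counter()
    for line in log_lines:
        # Assume each proxy log line ends with the requested URL.
        url = line.split()[-1]
        host = urlparse(url).hostname or ""
        domain = ".".join(host.split(".")[-2:])  # crude registrable-domain cut
        if domain and domain not in SANCTIONED:
            hits[domain] += 1
    return hits.most_common()

logs = [
    "10.0.0.5 GET https://www.dropbox.com/upload",
    "10.0.0.7 GET https://outlook.office365.com/mail",
    "10.0.0.5 POST https://www.dropbox.com/share",
]
print(shadow_it_report(logs))  # → [('dropbox.com', 2)]
```

Ranking unsanctioned domains by request volume gives security leaders a first-pass view of which shadow IT apps to investigate, block or formally sanction.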
Gary Miller, CISSP, CISA, CIA, CRMA, CCSA, ITILv3
Senior Director of Information Security at TaskUs
Note: Gary Miller will present on shadow IT risk and cloud governance at ISACA’s 2016 North America CACS conference in New Orleans, 2-4 May 2016. To learn more from him and other expert presenters, register here.
Cloud computing can offer organizations long-term IT savings, reductions in infrastructure costs and pay-for-service models, and moving IT services to the cloud lets organizations operate in a more geographically distributed way than ever before, even as the pace of business gets faster every day. Online collaboration has become a business necessity; there is no other way for distributed teams to work as quickly and efficiently as business demands. With virtual, paperless environments becoming more common, simply locking the doors at night no longer protects merchants, banks, customers or the business they conduct.
This means that exploitation is shifting from internal systems to the web. Due to these changes, today’s business needs demand that applications and data move not only across physical and international borders, but also to the cloud, where they are accessible by third parties. This loss of control is significant for security teams that must not only keep data safe, but also comply with the necessary security standards, including the Payment Card Industry Data Security Standard (PCI DSS). The payment card industry (PCI) should recognize that the most effective way to protect customer data is to protect the networks from the point of purchase to the application servers.
The PCI DSS security requirements apply to all system components included in or connected to the cardholder data environment. The cardholder data environment (CDE) is comprised of people, processes and technologies that store, process or transmit cardholder data or sensitive authentication data. “System components” include network devices, servers, computing devices and applications.
Five compliance challenges organizations may encounter are:
- The cloud is relatively new technology and may be misunderstood.
- Clients may have limited visibility into the service provider’s infrastructure and the related security controls.
- It can be challenging to verify who has access to cardholder data processed, transmitted or stored in the cloud environment.
- Public cloud environments are usually designed to allow access from anywhere on the Internet.
- Some virtual components do not have the same level of access control, logging, and monitoring as their physical counterparts.
Meeting the Compliance Requirements
Shared hosting providers must protect each customer’s hosted environment and cardholder data. As of 30 June 2015, these providers must meet specific, additional requirements set out in Appendix A of PCI DSS version 3.0. Below are a few highlights:
- PCI DSS requires that hosting providers ensure that each customer only runs processes that have access to that entity’s cardholder data environment.
- Access and privileges must be restricted to each customer’s own cardholder data environment.
- If a customer (merchant or service provider) is allowed to run its own applications on a shared server, it should run with the user ID of the customer, rather than as a privileged user.
- Logging and audit trails must also be enabled, unique to each entity’s cardholder data environment and consistent with PCI DSS requirements.
- Logs should be available to each customer, specific to their cardholder data environment.
- Processes must also be available to provide timely forensic investigation in the event of any compromise of cardholder data or systems.
- Even if a hosting provider meets PCI DSS requirements, the compliance of the customer using the service is not guaranteed.
- Each entity will need to comply with PCI DSS and validate its own compliance as applicable.
PCI DSS compliance is mandatory for banks, merchants and providers that process, transmit or store cardholder data. The risk of noncompliance is substantial, including fines, potential security breaches and loss of business.
Any enterprise that falls within the scope of the standard must implement it and seek compliance. Merchants that fail to comply might be forced to pay an extra percentage on transactions for noncompliance. There are also fines for storing sensitive authentication data, which is not allowed under the standard. Penalties for data breaches at noncompliant companies can be severe, including large fines as well as the threat of future exclusion from the payment card network.
Adesanya Ahmed, CRISC, CGEIT, ACPA, ACMA
IT Security and Connectivity Consultant, Petrovice Resources International Ltd.