Storing for the Future: How Data Centers Will Advance in 2020

Marty Puranik

The idea that data is an incredibly valuable resource in the modern business landscape isn’t new—but best practices for managing that data seem to change almost by the year. More than ever, enterprises leverage data centers to do their work, and savvy executives will be looking ahead in 2020 and beyond to learn how data can be managed more effectively.

Let’s consider three key questions here.

How will the advancement of AI improve the efficiency of data center technology?
Increasingly, artificial intelligence is being “baked in” to products from the get-go. A popular example of this concept would be IoT appliances—think a refrigerator that’s able to identify the items on its shelves, automatically facilitate restock orders and report on its own functioning and maintenance needs. Data center hardware can similarly benefit from AI:

  • Collecting Operational Data: IoT-empowered data centers keep track of their own systems on a more granular level, making it easy to compare actual performance with expected baselines. Data points might include temperature, battery functioning, data retrieval times and power usage.
  • Descriptive Analytics: Purpose-built analytics suites convert reams of data into useful insights—for customers and manufacturers alike.
  • Optimizing Efficiencies: AI can automatically regulate resource usage to save energy during low-usage periods, and take action when higher usage threatens to cause costly downtime.
  • Factoring in Context: There’s also the outside world to consider. By factoring in important contextual data, such as weather (which impacts cooling in each facility) and holidays (think Cyber Monday usage spikes), AI can tailor its functioning to adapt on the fly.
  • Detecting Malfunctions: AI can identify significant anomalies and take action to resolve issues before they become critical (a minimal sketch of this kind of check follows this list).
  • Anticipating Equipment Failure: AIs can project when components are likely to fail or fall below an acceptable level of efficiency. Having a clear understanding of a given piece of equipment’s natural lifecycle means having plans in place to keep things humming along.
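
To make the anomaly-detection idea concrete, here is a minimal sketch of the kind of check an AI-assisted monitoring layer might run against data center telemetry. The sensor, sampling interval, readings and threshold are all hypothetical, not taken from the article.

```python
# Hypothetical sketch: flag anomalous readings in data center telemetry using
# a rolling mean/standard-deviation (z-score) baseline.
from statistics import mean, stdev

def flag_anomalies(readings, window=12, threshold=3.0):
    """Return indices whose reading deviates more than `threshold`
    standard deviations from the trailing `window` of observations."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Example: inlet temperature (Celsius) sampled every 5 minutes; the spike at
# the end would be flagged for follow-up before it becomes critical.
inlet_temp = [22.1, 22.3, 22.0, 22.2, 22.4, 22.1, 22.3, 22.2,
              22.0, 22.1, 22.3, 22.2, 22.1, 22.4, 27.8]
print(flag_anomalies(inlet_temp))  # -> [14]
```

A production system would learn its baselines and thresholds rather than hard-coding them, but the principle is the same: compare live readings against expected behavior and escalate the outliers.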

In 2016, Google famously reduced the energy used for cooling in its data centers by a whopping 40% after allowing its DeepMind AI to optimize their operation. Now the company is even manufacturing its own custom chipsets to squeeze out greater efficiencies and reduce the overall number of data centers it relies on for highly resource-intensive functions like speech recognition.

How will data centers be affected by 5G wireless becoming the eventual standard?
By the end of 2020, 5G will be well on its way to becoming the new standard. This has big implications for applications like driverless vehicles, which are too data-intensive to function properly with current-generation 4G connectivity. It’s believed 5G wireless will be able to support speeds of up to 10 Gb/s, roughly 100 times faster than 4G—which is expected to cause a permanent spike in data usage.

You may recall talk in 2016 about the internet entering the so-called Zettabyte Era when global IP traffic first exceeded one zettabyte. According to Cisco Systems, which coined the original term, 5G will bring about the Mobile Zettabyte Era. Considering that the internet already consumes, by some estimates, roughly 10% of the world’s electricity each year, this has massive implications for data centers.

On the one hand, the anticipated increased demand on data center hardware is already helping to spark a construction gold rush (see the next question for more on that)—a development that will benefit companies that can afford to build at hyper-scale for interconnectivity.

On the other hand, 5G offers so many potential benefits to enterprises (such as improved power efficiency, dynamic resource allocation and massively improved support for IoT applications) that overall business for data centers should thrive.

Will the surge in cloud data center construction make the idea of an on-premise data center obsolete for enterprises?
Data center construction is big business, with cloud companies spending over US$150 billion on new construction in the first half of 2019 alone. Does this spell doom for the on-premise server farm?

Gartner Research VP David Cappuccio certainly thinks so. In a blog post called “The Data Center is Dead,” the veteran infrastructure researcher asserts his belief that by 2025 no less than 80% of enterprises will have shut down their on-premise data centers. The crux of his argument is that most of the advantages of traditional data centers have evaporated thanks to technological advancements—notably faster data transfer and the greater operational efficiencies at hyper-scale that mammoth server farms enable.

The real tipping point, though, is at the edge.

Edge data centers are located close to customers’ physical locations, reducing latency. This improves service for more intensive needs like gaming, streaming and cloud computing. Having local nodes allows larger distributed cloud networks to also offer consistent enterprise-quality performance, even outside of high-tier regions like New York and San Francisco.

Altogether, most of the key advantages of on-premise data centers have been obviated, and those that remain have been relegated to niche functions. Today’s IT decision-makers look for solutions based on their general business needs, such as the specific requirements for data centers in healthcare, rather than trying to force a solution to fit their existing data architecture.

This agility helps enterprises more easily hunt for efficiencies, which will remain the hallmark of a successful company in 2020.

Big Data Analytics Powering Progress in Animal Agriculture

There has been significant progress in technologies that can be utilized in the livestock industry. These technologies will help farmers, breeders’ associations and other industry stakeholders continuously monitor and collect animal-level and farm-level data using less labor-intensive approaches.

Specifically, we are seeing the use of fully automated data recording based on digital images, sounds, sensors, unmanned systems and real-time, uninterrupted computer vision. These technologies can help farmers tremendously and have the potential to enhance product quality, animal well-being and health, management practices and sustainability, and ultimately contribute to better human health.

These technologies, when combined with rich molecular information from animals, such as transcriptomics, genomics and microbiota data, can help achieve the long-standing dream of precision animal agriculture. What this means is that, with the help of technology, we will be able to better monitor and manage individual animals using tailored information.

However, the complexity and growing volume of the data generated by the fully automated recording and phenotyping platforms mentioned above create several hindrances to the successful implementation of precision animal agriculture.

How Machine Learning and Data Mining Help
The growing areas of machine learning and data mining are expected to help meet the challenges faced in global agriculture.

When combined with big data, machine learning models can serve as a framework for biology. However, highly complex models tend to overfit their training data, and overfitting is the most common reason naive applications of complex models fail.

The primary reasons for applying machine learning techniques to animal science are:

  1. To build prior knowledge into models through regularization, an ongoing effort (see the sketch after this list)
  2. To continuously gather and integrate data sets of different modalities, increasing the number of samples available for training
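
As a minimal illustration of regularization acting as prior knowledge, here is a sketch, in Python with entirely synthetic data and made-up dimensions, comparing an unregularized fit with a ridge-penalized fit when samples are scarce relative to the number of features:

```python
# Minimal sketch of regularization as "prior knowledge": closed-form ridge
# regression compared with ordinary least squares on synthetic data.
# Sample and feature counts are illustrative, not from the article.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 60, 50           # few animals relative to features
X = rng.normal(size=(n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:5] = 1.0                          # only a handful of features matter
y = X @ true_w + rng.normal(scale=1.0, size=n_samples)

def ridge_fit(X, y, lam):
    """Ridge solution (X'X + lam*I)^-1 X'y; lam=0 reduces to least squares."""
    p = X.shape[1]
    if lam == 0:
        return np.linalg.pinv(X) @ y
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

X_test = rng.normal(size=(500, n_features))
y_test = X_test @ true_w
for lam in (0.0, 10.0):
    w = ridge_fit(X, y, lam)
    err = np.mean((X_test @ w - y_test) ** 2)
    print(f"lambda={lam:5.1f}  test MSE={err:.2f}")
# With these settings the penalized fit typically achieves a noticeably lower
# test error: the unregularized model overfits the small training set, while
# the penalty shrinks spurious coefficients.
```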

After collecting the data, one has to keep in mind the computational load required to analyze these large, integrated data sets. Whenever possible, one should also consider the compatibility of the model with parallel computing.

For example, GPU cloud computing services offered by Amazon Web Services (AWS) and Microsoft Azure might prove useful; they also provide infrastructure to secure, host and share big data. With the guidance of machine learning and data mining methods, the animal sciences can reach the next phase of growth in big data and reconsider every aspect of management decision-making.

In conclusion, precision animal agriculture is bound to rise in the livestock enterprise in the domains of production, management, welfare, health surveillance, sustainability and environmental footprint. Significant progress has been made in the utilization of tools to regularly monitor and collect information from farms and animals in a less tedious manner than before.

With these methods, the animal sciences have embarked on a journey to improve animal agriculture with information technology-driven discoveries. The computational burden of analyzing these data can be managed by utilizing popular cloud platforms such as AWS and Azure, while overfitting is addressed through regularization and larger, integrated training sets.

About the author: Harsh Arora is a proud father of four rescued dogs and a leopard gecko. Besides being a full-time dog father, he is a freelance content writer/blogger and a massage expert who is skilled in using the best massage gun to deliver the best results.

How Responsible Are Cloud Platforms for Cloud Security?

Larry Alton

These days, just about every software platform or app available has some kind of cloud functionality. They might host your data in the cloud, give you cross-platform access to your account, or allow you to upload and download files anywhere. This is remarkably convenient, and a major breakthrough for productivity and communication in the workplace, but it also comes with its share of vulnerabilities. A security flaw could make your data available to someone with malicious intentions.

Cloud security is a complex topic that comprises many different considerations, including the physical integrity of the data center where your data is held and the coding of the software that allows you to access it. A trustworthy cloud developer should take precautions and improve cloud security as best it can—but how responsible should the developer be for ensuring the integrity of its system?

Cloud Platform Responsibilities
There are many potential points of vulnerability that could compromise the integrity of a cloud account. However, not all of them are controllable by the cloud developer—as we’ll see.

Let’s start by focusing on the areas of security that a cloud developer and service provider could feasibly control:

  • Physical data storage and integrity. Most cloud platforms rely on massive, highly secured data centers where they store user data and keep it safe. Because cloud platforms are the only ones with access to these data centers, they’re the ones responsible for keeping them secure. That often means creating redundancy, with multiple backups, and physical protective measures to guard against attacks and natural disasters.
  • Software integrity. The cloud platform is also responsible for ensuring the structural integrity of the software. There shouldn’t be any coding gaps that allow someone to forcibly enter and/or manipulate the system.
  • API, communication, and integration integrity. One of the biggest potential flaws in any app is its connection points to other users and other integrations. If the app allows message exchanges, it should be secured with end-to-end encryption. If there are any active API calls to integrate with other applications, these need to be highly secure.
  • User controls and options. It’s also important for cloud apps to include multiple options and features for users to take charge of their own security. For example, this may include the ability to create and manage multiple types of users with different administrative privileges (a minimal sketch of such role-based controls follows this list).
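
To illustrate that last item, here is a minimal, hypothetical sketch of role-based user controls; the role names and permissions are invented for illustration rather than taken from any particular platform:

```python
# Hypothetical sketch: a cloud app exposing multiple user types with different
# administrative privileges, plus a simple permission check.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "manage_users", "configure_security"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Each end user can be given the least privilege needed for their job:
print(is_allowed("viewer", "write"))              # False
print(is_allowed("admin", "configure_security"))  # True
```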

Other Responsibilities
However, there are some other points of vulnerability outside the realm of a cloud provider’s direct control. For example:

  • Network encryption and security. There isn’t much a cloud platform can do if end users are relying on a public network, or one that isn’t secured with encryption and a strong password. This is a responsibility that falls squarely on the shoulders of the end user.
  • Hardware and endpoint security. While the software development process requires a developer to have some level of understanding of the hardware being used to access their apps, they’re limited in their understanding of those inherent vulnerabilities. There also isn’t much a cloud platform can do if their end users are using outdated devices, or devices with massive security flaws.
  • Password and account protection habits. It’s almost entirely the end users’ responsibility to create strong passwords and protect their own accounts. If they end up choosing weak passwords, or if they never change those passwords, no amount of built-in security can help them. The same is true if they fall for phishing schemes, or if they voluntarily give their password to someone. Along similar lines, it’s important that end users understand the nature of online scams, but this is generally outside the realm of the developer’s control.
  • Malware. When malware is installed on a device, it could gain access to everything else on that device, including being able to spy on actions performed within the cloud app. Unless a cloud app deliberately scans for malware, there’s no way for it to tell that it’s installed. It’s the end user’s responsibility to take preventative measures, such as avoiding suspicious download links and installing antivirus software to run occasional scans.

Even if these aren’t directly within the control of a cloud platform provider, there are steps a provider can take to improve them or mitigate the potential vulnerabilities. For example, a cloud provider can’t guarantee good password creation and adjustment habits, but it may be able to educate users on the importance of good password habits and/or force them to update their passwords regularly.

Overall, cloud platforms should be held to high security standards, but there are limits to what they can control. Digital security in all its forms needs to be a team effort; even a single vulnerability can compromise the entire system.

Editor’s note: For additional insights on this topic, download ISACA’s white paper, Continuous Oversight in the Cloud.

In the Age of Cloud, Physical Security Still Matters

Sourya Biswas

As a security consultant, I’ve had the opportunity to assess the security postures of clients of all shapes and sizes. These enterprises have ranged in size from a five-person startup where all security (and information technology) was handled by a single individual to Fortune 500 companies with standalone security departments staffed by several people handling application security, vendor security, physical security, etc. This post is based primarily on my experiences with smaller clients.

Cloud computing has definitely revolutionized the way companies do business. Not only does it allow companies to focus on core competencies by outsourcing a major part of the underlying IT infrastructure (and associated problems), it also allows for the conversion of heavy capital expenditure into scalable operational expenses that can be turned up or down on demand. The latter is especially helpful for smaller companies that can now access technologies that before had only been available to enterprises with million-dollar IT budgets.

Information security is one area where this transformation has been really impactful. With the likes of Amazon, Google and Microsoft continually updating their cloud environments and making them more secure, a lot of those security responsibilities can be handed over to the cloud providers. And this includes physical security as well, with enterprises no longer having to secure their expensive data centers.

However, this doesn’t mean that the need for physical security in the operating environment disappears. I once had a client CEO say to me, and I’m quoting him word for word – “Everything is in the cloud; why do I need physical security?” I responded, “Let’s consider a hypothetical scenario: you’re logged into your AWS admin account on your laptop and step away for a cup of coffee; I walk in and walk away with your laptop. Will that be a security issue for you?” Considering that this client had multiple entry points to its office with no receptionist, security guard or badged entry, I consider this scenario realistic instead of just hypothetical.

I’ve visited client locations, signed in on a tablet with my name and who I was supposed to meet, the person was notified, and I was subsequently escorted in. Note that at no point in this process was I required to verify who I was. Considering the IAAA (Identification, Authentication, Authorization, Auditing) model, I provided an Identity, but it was not Authenticated. In fact, if somebody else had signed in with my name, they would have gained access to the facility, considering the client contact was expecting me, or rather someone with my name, to show up around that time.

Let’s look at one more example. One of my clients, dealing with sensitive chemicals, had doors alarmed and CCTV-monitored. However, they left their windows unguarded, with the result that a drug addict broke in and stole several thousand dollars’ worth of material.

Smaller companies on smaller budgets obviously want to limit their spend on security. And with their production environments in the cloud, physical security of their office environments is the last thing on their minds. However, most of them have valuable physical assets, even if they don’t realize it, that could be secured by spending minimally. Here are a few recommendations:

  • Ensure you have only a single point of entry during normal operations. Having an alarmed emergency exit is, however, highly recommended.
  • Ensure that the above point of entry is covered by a camera. If live monitoring of the feed is too expensive, ensure that the time-stamped footage is stored offsite and retained for at least three months so that it can be reviewed in case of an incident.
  • Install glass breakage alarms on windows. Put in motion sensors.
  • In addition to alarms for forced entry, an alarm should sound for a door held open for more than 30 seconds. Train employees to prevent tailgating.
  • Require employees and contractors to wear identification badges visibly.
  • Verify identity of all guests and vendors before granting entry. Print out different-colored badges and encourage employees to speak up if anyone without a badge is on the premises.
  • Establish and enforce a clear screen, clean desk and clear whiteboard policy.
  • Put shredding bins adjacent to printers. Shred contents and any unattended papers at close of business.
  • Mandate the use of laptop locks.

Please note that the above recommendations are not expensive to implement. While some are process-based and require employee training, most require minimal investment in off-the-shelf equipment. Of course, there are varying degrees of implementation – for example, contracting with a vendor to monitor and act on alarms will cost more than just sounding the alarm.

In summary, while physical security requirements have definitely been reduced by moving to the cloud, it would be foolhardy to believe they have disappeared. This relative neglect of physical security by certain companies, and more, is the subject of my upcoming session at the ISACA Geek Week in Atlanta.

What other physical security measures do you think companies often ignore but would be easy to implement? Respond in the comments below.

Why You Need to Align Your Cloud Strategy to Business Goals

Leron Zinatullin

Your company has decided to adopt the cloud – or maybe it was among the first ones that decided to rely on virtualized environments before it was even a thing. In either case, cloud security has to be managed. How do you go about that?

Before checking out vendor marketing materials in search of the perfect technology solution, let’s step back and think of it from a governance perspective. In an enterprise like yours, there are a number of business functions and departments with various levels of autonomy.

Do you trust them to manage business process-specific risk, or do you choose to relieve them of this burden by setting security control objectives and standards centrally? Or maybe something in between?

Centralized model

Managing security centrally allows you to uniformly project your security strategy and guiding policy across all departments. This is especially useful when aiming to achieve alignment across business functions. It helps when your customers, products or services are similar across the company, but even if they are not, centralized governance and clear accountability may reduce duplication of work by streamlining processes and using people and technology cost-effectively (if organized in a central pool).

If one of the departments is struggling financially or is less profitable, the centralized approach ensures that overall risk is still managed appropriately and security is not neglected. This point is especially important when considering a security incident (such as misconfigured access permissions) that may affect the whole company.

Responding to incidents, in general, may be simplified not only from a reporting perspective but also by making sure due process is followed with appropriate oversight.

There are, of course, some drawbacks. In the effort to come up with a uniform policy, you may end up in a situation where the policy loses its appeal and is perceived as too high-level and out of touch with real business unit needs. Buy-in from business stakeholders, therefore, might be challenging to achieve.

Let’s explore the alternative – the decentralized model.

Decentralized model

This approach is best applied when your company’s departments have different customers, varied needs and business models. This situation naturally calls for more granular security requirements, preferably set at the business unit level.

In this scenario, every department is empowered to develop its own set of policies and controls. These policies should be aligned with the specific business needs relevant to that team. This allows for local adjustments and increased levels of autonomy. For example, upstream and downstream operations of an oil company have vastly different needs due to the nature of the activities in which they are involved. Drilling and extracting raw materials from the ground is not the same as operating a petrol station, which can feel more like a retail business than one dominated by industrial control systems.

Another example might be a company that grew through a series of mergers and acquisitions, in which the acquired companies retained a level of individuality and operated under the umbrella of a parent corporation. With this degree of decentralization, resource allocation is no longer managed centrally, which, combined with increased buy-in, allows for greater ownership of the security program.

This model naturally has limitations. These were highlighted when identifying the benefits of the centralized approach: potential duplication of effort, an inconsistent policy framework, challenges in responding to enterprise-wide incidents, etc. But is there a way to combine the best of both worlds? Let’s explore what a hybrid model might look like.

Hybrid model

A middle ground can be achieved by establishing a governance body that sets goals and objectives for the company overall and allows departments to choose how to achieve those targets. What are examples of such centrally defined security outcomes? Maintaining compliance with relevant laws and regulations is an obvious one, but the point is more subtle than that.

The aim here is to make sure security is supporting the business objectives and strategy. Every department in the hybrid model, in turn, decides how their security efforts contribute to the overall risk reduction and better security posture.

This means setting a baseline of security controls, communicating this to all business units, and then gradually rolling out training, updating policies and setting risk, assurance and audit processes to match. While developing this baseline, however, input from various departments should be considered, as it is essential to ensure adoption.

When an overall control framework is developed, departments are asked to come up with a specific set of controls that meet their business requirements and take distinctive business unit characteristics into account. This should be followed up by a gap assessment to identify potential inconsistencies with the baseline framework; a minimal sketch of such a comparison follows.
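
Here is a small sketch of what that comparison might look like in practice; the control identifiers and business unit names are hypothetical placeholders:

```python
# Minimal sketch of a gap assessment: compare a centrally defined baseline of
# controls against what each business unit has actually implemented.
baseline_controls = {
    "MFA-required", "encryption-at-rest", "central-logging",
    "quarterly-access-review", "incident-response-plan",
}

business_units = {
    "upstream":   {"MFA-required", "encryption-at-rest", "central-logging"},
    "downstream": {"MFA-required", "central-logging",
                   "quarterly-access-review", "incident-response-plan"},
}

for unit, implemented in business_units.items():
    gaps = baseline_controls - implemented       # baseline not yet met
    extras = implemented - baseline_controls     # unit-specific additions
    print(f"{unit}: missing {sorted(gaps)}; local extras {sorted(extras)}")
```

The output feeds naturally into the shared compliance-monitoring and reporting loop described below, giving the central function consistent metrics across units.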

In the context of the cloud, decentralized and hybrid models might allow different business units to choose different cloud providers based on individual needs and cost-benefit analysis. They can go further and focus on different solution types such as SaaS over IaaS.

As mentioned above, business units are free to decide on implementation methods for security controls, provided they align with the overall policy. Compliance monitoring responsibilities, however, are best shared. Business units can manage the implemented controls but should link in with the central function for reporting in order to agree on consistent metrics and remove potential bias. This approach is similar to the Three Lines of Defense model employed in many organizations to effectively manage risk: departments themselves own and manage risk in the first instance, with security and audit/assurance functions forming the second and third lines of defense, respectively.

What next?

We’ve looked at three different governance models and discussed their pros and cons in relation to the cloud. Depending on the organization, the choice can be fairly obvious. It might emerge naturally from the way the company runs its operations. All you need to do is factor in the organizational culture and adapt the approach to cloud governance accordingly.

The point of this blog post, however, is to encourage you to consider security in the business context. Don’t just select a governance model based on what sounds good or what you’ve done in the past. Instead, analyze the company, talk to people, see what works, and be ready to adjust the course of action.

If the governance structure chosen is wrong or, worse still, undefined, this can stifle the business instead of enabling it. And believe me, that’s the last thing you want to do.

Be prepared to listen: the decision to choose one of the above models doesn’t have to be final. It can be adjusted as part of the continuous improvement and feedback cycle. It always, however, has to be aligned with business needs.

5G and AI: A Potentially Potent Combination

Kris Seeburn

Last week’s US State of the Union address by President Donald J. Trump promised legislation to invest in “the cutting edge industries of the future.” Without much detail initially available, the White House filled in the blanks by suggesting “President Trump’s commitment to American leadership in Artificial Intelligence, 5G wireless, quantum science and advanced manufacturing will ensure that these technologies serve to benefit the American people and that the American innovation ecosystem remains the envy of the world for generations to come.”

This comes at a time when countries such as China have really taken a leap forward on these technologies, with Chinese telecommunications company Huawei making especially notable strides. On a global level, we need to understand that 5G stands at the crossroads of speed that will change the processing capabilities for AI and will narrow the gap between processing in the cloud versus on devices. It also is going to be a major contributor to driving centralized processing.

5G makes the debate around AI edge computing irrelevant. Imagine the speed in gigabits that 5G can deliver in terms of bandwidth, millisecond latencies and reliable connections. The network architecture easily supports AI processing and will change the AI landscape.

To provide some context, it is important to recognize how 5G and AI are intertwined. 5G is described as the next-generation mobile communication technology of the near future and will enhance the speed and integration of various other technologies. Its speed, quality of service and reliability can transform the current way we use the internet and its related services.

On the other hand, AI is poised to allow machines and systems to function with intelligence levels approaching those of humans. With 5G supporting, in the background, online simulation for analysis, reasoning, data fitting, clustering and optimization, AI will become more reliable and accessible at remarkable speed. Once you have trained your systems to perform certain tasks, analysis becomes automatic and faster while costing far less.

Put simply, 5G speeds up the services that you may have on the cloud, an effect similar to being local to the service. AI gets to analyze the same data faster and can learn faster to be able to develop according to users’ needs.

5G also promises significant breakthroughs over traditional mobile communications systems. It will enhance the capabilities of our traditional networks: even the kinds of speeds we get over wire or fiber become achievable over a 5G network, which will evolve to support IoT applications in various fields, including business, manufacturing, healthcare and transportation. 5G will serve as the basic technology for future IoT deployments that connect and operate entire organizations, the aim being to support differentiated applications within a uniform technical framework.

At the same time, AI is rising to meet the challenges associated with the 5G era and is a promising support for them, which will lead to revolutionary concepts and capabilities in communications. This will also raise the game in the applications world as business requirements become more demanding. As mentioned, the gap between cloud and on-device processing will continue to narrow, and the dream of a massive IoT network will become more feasible.

In reality, 5G will take some time to have a significant impact on AI processing. In the meantime, as AI applications are integrated into devices, relying on device-based AI processing, rather than waiting for 5G to be deployed, seems a safe strategy. However, one thing is for sure: the push is to have 5G and AI integration happen on the same chips in your mobile smartphones, making those phones more intelligent as well.

The question now is: are we ready to see this happen? Well, it already is beginning to unfold in some countries around the world, with China leading the pack. The smartphone arena seems to be especially competitive, which can force earlier adoption and network change. Be ready, across both the security and assurance lanes, as we will need to adapt to these standards sooner rather than later.

Before You Commit to a Vendor, Consider Your Exit Strategy

Baan Alsinawi

Vendor lock-in. What is it? Vendor lock-in occurs when you adopt a product or service for your business, and then find yourself locked in, unable to easily transition to a competitor's product or service. Vendor lock-in is becoming more prevalent as we migrate from legacy IT models to the plethora of sophisticated cloud services offering rapid scalability and elasticity, while fueling creativity and minimizing costs.

However, as we rush to take advantage of what the cloud has to offer, we should plan strategically for vendor lock-in. What happens if you find another cloud provider that you prefer? How will you migrate your services? What are the costs, how disruptive will it be, and will you have the professional talent to transition successfully?

As a vendor, locking in customers by ensuring that they cannot easily transition elsewhere is smart business. However, as a buyer looking for innovative solutions and a better value for services, you require flexibility if your business needs change, or if a vendor is no longer available due to bankruptcy or restructuring.

As you adopt a growing array of cloud-based anything-as-a-service (XaaS) to outsource your business support functions—from Salesforce to AWS services, Google docs to Microsoft Office 365—consider your exit strategy if your business needs change, or your vendor is no longer available.

Take a step back and consider vendor lock-in as part of your overall risk management strategy. A single cloud provider can offer great options for redundancy, risk management and design innovation. But what happens when you consider redundancy across multiple providers? How easy is it to have a primary service on AWS and a secondary/backup on Google? Not so easy.

Best practices suggest that you shouldn’t put all your eggs in one basket. However, developing a SaaS solution designed to work on two disparate cloud services is a complex undertaking. If you are simply using the cloud for storage/raw data backup, you may be able to transfer data between providers. Even then, you need to pay attention to data structures and standards across platforms. When developing complex solutions that rely on outsourced technologies such as AWS continuous development/continuous integration (CD/CI), Splunk Cloud for auditing, or Qualys Cloud for vulnerability scanning, how much redundancy and portability are you baking into your risk management strategy?

Also, what happens if AWS is no longer available? This seems highly unlikely today, with Amazon’s stock hovering at around US$2,000 a share. But what if your new CIO decides Azure offers better widgets? Or your CISO wants a primary platform on AWS and a backup on Oracle? There are vast differences in these platforms, and one development effort will not be easily portable to the other.

For example, TalaTek is developing its own next-generation cloud-based solution for its current platform. We must consider the additional time, multiple developers and increased complexity required to operate on two different cloud platforms to manage this risk. The question we ask is: can we afford not to plan for an exit strategy if our strategic business goals change?

Acknowledging the risk, and in some cases accepting it, is a key aspect of risk management. TalaTek has accepted the risk in adopting a single cloud platform, since it makes business sense to do so.

What should you consider when adopting cloud-based services? Here are our top five considerations:

  • Have a resilient risk management strategy that requires you to continuously re-evaluate your risk assumptions and diligently monitor market changes.
  • Negotiate strong service-level agreements, vetted by legal experts, in the design of your cloud strategy.
  • Align your business and IT/cloud strategies to protect your investments and ensure continuity of operations.
  • Where possible, use open source stacks and standard API structures to provide portability and interoperability (see the sketch after this list).
  • Consider whether your risk tolerance allows you to accept some risk. If you are offering a SaaS solution to manage your clients’ CRM, your risk tolerance is different from that of a hospital using the cloud to manage all of its client health data.
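
As an illustration of the portability point above, here is a minimal sketch of isolating application code behind a thin storage interface. It assumes the boto3 and google-cloud-storage client libraries; the bucket names and the archive_report helper are hypothetical.

```python
# Sketch of reducing lock-in with a thin storage interface: application code
# depends on ObjectStore, not on a specific provider SDK.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3
        self._s3, self._bucket = boto3.client("s3"), bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

class GCSStore(ObjectStore):
    def __init__(self, bucket: str):
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Business logic is provider-agnostic; swapping clouds means swapping the
    # adapter passed in here, not rewriting the application.
    store.put("reports/latest.bin", report)

# archive_report(S3Store("my-backup-bucket"), b"...")
# archive_report(GCSStore("my-backup-bucket"), b"...")
```

Because the application depends only on the ObjectStore interface, switching providers means writing and testing another adapter rather than rewriting business logic.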

The cloud is here to stay. Assess your options, be smart about your strategy, and consider your exit options as you embark on the exciting journey into the cloud.

FedRAMP: Friend or Foe for Cloud Security?

Baan Alsinawi

Cloud security is on everyone’s minds these days. You can’t go a day without reading about an organization either planning its move to the cloud or actively deploying a cloud-based architecture. A great example is the latest news about the US Department of Defense and its ongoing move to the cloud.

The US government is leading the charge by encouraging the private sector to provide secure cloud service offerings that enable federal agencies to adopt the cloud-first policy (established by the Office of Management and Budget in 2011) using FedRAMP. FedRAMP, the Federal Risk and Authorization Management Program, is a US government-wide approach to security assessment, authorization and continuous monitoring for cloud products and services. It sets a high bar for compliance with standards that ensure effective risk management of cloud systems used by the federal government.

There is even some chatter now about efforts to codify FedRAMP into law, in an effort to encourage agencies to adopt the cloud at a more rapid pace. The delay in adoption is in no small measure related to the complexity and intensive resource requirements of the current FedRAMP processes, and to finding providers that are FedRAMP-certified.

One of the main barriers to wider adoption of FedRAMP is the difficulty for industry, both Third Party Assessment Organizations (3PAOs) and cloud service providers (CSPs), in determining the profitability model for engaging in the FedRAMP program.

Establishing such metrics can offer key drivers for industry adoption, perhaps by allowing CSPs to determine how offering FedRAMP-accredited IaaS/PaaS/SaaS can be truly beneficial and profitable for their bottom line, while at the same time allowing agencies to determine the cost-effectiveness of a move to the cloud.

While achieving FedRAMP accreditation has many challenges (as TalaTek learned over the past 18 months during deployment of its own cloud-based solution), there are clear benefits for federal agencies and industry in working with FedRAMP-authorized service providers. At a high level, these include established trust in the effectiveness of implemented controls and improvement of data protection measures.

Despite the many challenges to adoption, I am a big believer that the benefits of the FedRAMP program outweigh the challenges, especially in the long run, after the kinks are ironed out and the program matures through increased adoption by both government and private industry.

The FedRAMP program provides significant value by increasing protection of data in a consistent and efficient manner – a key need among government organizations and especially among information sharing agencies – by providing these key benefits: 

  • Enables a more successful move to the cloud for federal agencies;
  • Ensures a minimum security baseline for all cloud services; 
  • Provides managed security continuity for a cloud offering versus a onetime compliance activity;
  • Standardizes requirements for all cloud service providers; and
  • Creates a 3PAO cadre that is capable, certified and can ensure quality assurance for cloud implementations.

By providing a unified, government-wide framework for managing risk, FedRAMP overcomes the downside of duplication of effort and inefficiency associated with existing federal assessment and authorization processes.

When considering a move to the cloud and the level of security that is necessary, we should all take risk management seriously and invest in skill development and knowledge, as well as in adapting the processes for the 21st century and getting ready for the reality of the dominance of the cloud in our near future. FedRAMP provides the roadmap for any organization to achieve these goals.

Key Takeaways from a Recent Cloud Training

Adam Kohnke

I recently began taking my first crack at auditing an Amazon cloud platform that comprises over a dozen managed services. While I was excited to add this new wrinkle to my skill set, I had no idea where to get started on identifying key risks applicable to each service or how to approach the engagement. Searching online eventually led me to the AWS training and certification website. My intuition initially suggested that Amazon was not very likely to help me audit its services, or that even if it did, there probably would not be much free information available that I could leverage to obtain a sufficient understanding of the service architecture or operation. Well, I was dead wrong!

This blog post covers my experience with the free Cloud Practitioner Essentials course offered by Amazon and some of the key takeaways I obtained from various sections of this training.

Topics covered and use for auditors
The course takes about eight hours to complete, divided among five major sections (Cloud Concepts, Core Services, Architecture, Security, and Pricing and Support). If you can forgive Amazon’s “pointing two thumbs at itself” advertising tone, you can start picking up key risks and audit focus areas as you march on. My key takeaway from the Core Services portion of the course was about Trusted Advisor, a management tool auditors can utilize to obtain a very quick glance at how well their organization is utilizing its collection of services. Trusted Advisor highlights whether cost optimization of services is being achieved while providing recommendations on how to improve service usage. It also reports on whether performance opportunities exist, whether major security flaws are present, such as not utilizing multifactor authentication on accounts, and the degree of fault tolerance that exists across your compute instances. Trusted Advisor is not the be-all, end-all, as it uses a limited set of checks, but it provides a quick health check of the environment. A list of Identity and Access Management (IAM) best practices was also provided, along with details on how to validate them.
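
For auditors who want to pull those Trusted Advisor results programmatically, here is a hedged sketch using boto3. It assumes the account has a Business or Enterprise support plan (which the underlying AWS Support API requires) and appropriately scoped credentials.

```python
# Sketch: list Trusted Advisor checks and surface anything not marked "ok".
# The AWS Support API that backs Trusted Advisor is served from us-east-1.
import boto3

support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    # Flag warnings/errors, e.g., cost-optimization or security findings.
    if result["status"] != "ok":
        print(f'{check["category"]:>20}  {check["name"]}: {result["status"]}')
```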

Security, shared responsibility and suggested best practices
Amazon does a good job of pounding the shared responsibility security concept into your head for managing its services and touches on this concept throughout the training. Amazon is responsible for security of the cloud (physical security, hardware/firmware patching, disaster recovery, etc.) and the customer is responsible for security in the cloud (controlling access to data and services, firewall rules, etc.). My key takeaway from this portion of the training centered on some of the best practices for managing your root user account, issued to you upon establishing web services with Amazon. The root account is the superuser account that has complete access and control over every managed service in your portfolio. Because you cannot restrict the abilities of the root account, Amazon recommends deleting the access keys associated with the root account, establishing an IAM administrative user account, granting that account the necessary admin permissions and using that account for managing services and security.
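
The following is a minimal sketch of that recommendation using boto3; the user name is a placeholder, and in practice you would also enable MFA and follow your organization’s naming and approval processes rather than treating this as a complete hardening script.

```python
# Sketch of the recommended root-account hygiene: confirm no root access keys
# exist, then create a separate IAM administrative user for day-to-day work.
import boto3

iam = boto3.client("iam")

# 1. Check whether root access keys are present (they should be deleted).
summary = iam.get_account_summary()["SummaryMap"]
if summary.get("AccountAccessKeysPresent", 0):
    print("Root access keys exist -- remove them from the root account.")

# 2. Create an IAM admin user and grant it administrative permissions.
iam.create_user(UserName="iam-admin")  # placeholder name
iam.attach_user_policy(
    UserName="iam-admin",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
# Day-to-day management should then use this IAM user (ideally with MFA),
# leaving the root account for the rare tasks that genuinely require it.
```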

Detailed and helpful audit resources
The pricing and support section of the training provided very useful metrics that financial auditors and purchasing personnel should be interested in to help them determine the cost and efficiency of managed services. This portion of the course covers fundamental cost drivers (compute, storage and data transfer out) and very specific cost considerations for each fundamental cost driver, such as clock time per instance, level of monitoring, etc.
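
To make those cost drivers concrete, here is a rough, back-of-the-envelope sketch; all rates are hypothetical placeholders rather than published AWS prices.

```python
# Back-of-the-envelope sketch of the three fundamental cost drivers named
# above: compute (clock time per instance), storage and data transfer out.
HOURLY_COMPUTE_RATE = 0.10      # USD per instance-hour (placeholder)
STORAGE_RATE_GB_MONTH = 0.023   # USD per GB-month (placeholder)
EGRESS_RATE_GB = 0.09           # USD per GB transferred out (placeholder)

def monthly_estimate(instance_hours, storage_gb, egress_gb):
    """Rough monthly cost from the three primary drivers."""
    return (instance_hours * HOURLY_COMPUTE_RATE
            + storage_gb * STORAGE_RATE_GB_MONTH
            + egress_gb * EGRESS_RATE_GB)

# Two always-on instances (~730 hours each), 500 GB stored, 200 GB egress:
print(f"~${monthly_estimate(2 * 730, 500, 200):.2f} per month")
```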

My key takeaway from this portion of the course was the overview of the support options – more specifically, the audit white papers that are maintained and made available at no cost. There are very specific security audit guides, best practices for security, operational checklists and other information that will allow an auditor to build an accurate and useful engagement program.

This course led me to the Security Fundamentals course, which provides enhanced audit focus on several topics of interest to IT auditors, including access management, logging, encryption and network security.

In conclusion, I am surprised and extremely impressed with the amount and depth of free resources Amazon makes available to the auditing community regarding its services. I wish other providers would take similar measures to assist the information security community with understanding what is important and how to gain assurance over their services’ secure operation.

Editor’s note: View ISACA’s audit/assurance programs here.

The Impact of Net Neutrality on Cloud Computing

Marty Puranik

The US Federal Communications Commission (FCC) recently repealed the net neutrality guidelines that it implemented less than three years ago. There has been much discussion, speculation and concern about how that move will impact the Internet, small business and consumers. Many people have suggested that one effect of the repeal will be that video streaming and other cloud-hosted, web-delivered media will start to cost much more for the consumer.

When the FCC enacted the 2015 regulations, it became unlawful for broadband providers to slow down or block certain web traffic. The 2015 rules did not actually cover enterprise web access, which is often custom. They did, however, safeguard the flow of data to small businesses.

FCC chair Ajit Pai and other lawmakers take the position that net neutrality policies and practices are unnecessary rules that discourage investment in broadband networks and place an unfair strain on internet service providers (ISPs). That perspective does not seem to be aligned with that of the public: a poll from the University of Maryland showed that 83% of voters favored keeping the net neutrality rules in place.

What is net neutrality, exactly?
The basic notion behind the concept of net neutrality, according to a report by Don Sheppard in IT World Canada, is that the government should ensure that all bits of data and all information providers are treated in the same way.

Net neutrality makes it illegal to offer paid prioritization, throttle or block traffic, or engage in similar practices (see below).

Sheppard noted that there are two basic technical principles related to internet standards that are part of the basis for net neutrality:

  • End-to-end principle – Application-specific functions should take place at the endpoints of a general-purpose network, not at intermediate nodes.
  • Best-efforts delivery – There is no performance guarantee; instead, the network makes its best effort to deliver packets equally and without discrimination.

IoT: harder for startups to compete?
Growth of the Internet of Things (IoT) is closely connected to the expansion of cloud computing, since the former typically uses the latter as its back end. In terms of impact on the IoT, Nick Brown noted in IoT For All that the repeal of net neutrality will result in an uneven playing field in which it will become more difficult for smaller organizations to compete, while larger firms will be able to form tighter relationships with ISPs.

Greater latency is a key concern with the removal of net neutrality, because latency could increase as sites are throttled (deliberately slowed). Throttling could occur between one device and another because ISPs may want some devices (perhaps ones they build themselves) to perform better than others.

Some people think the impact of the net neutrality repeal on the IoT will be relatively minor. However, many thought leaders think there will be a significant effect since IoT devices rely so heavily on real-time analysis.

Entering a pay-to-play era
Throttling, or slowing throughput, could occur with video streaming services and other sites. Individual cloud services could be throttled. Enterprises could have difficulty with apps that they host in their own data centers, too, since those apps also require a fast internet connection to function well.

There will be a pay-to-play scenario for web traffic instead of just using bandwidth to set prices, according to Forrester infrastructure analyst Sophia Vargas.

Competition between wired and wireless services has resulted from changes to their pricing models following the repeal of net neutrality, said Vargas. Wired, landline services are priced per unit of bandwidth, while wireless services are priced per unit of data. Wireless services will have the most difficulty because wired services are controlled by a smaller number of ISPs.

There will be more negotiations and volatility in the wireless than in the wired market, noted Vargas. Competition is occurring “in the ability for enterprises to essentially own or get more dedicated [wired] circuits for themselves to guarantee the quality of service on the backend,” she added.

Does net neutrality really matter?
The extent to which people are committing themselves to one side or the other gives a sense of how critical net neutrality is from a political, commercial and technical perspective. A consumer should be aware of the potential for companies to mistreat them without these protections in place (which is not to say those abuses will occur).

Ways that ISPs could act against the precepts of net neutrality include:

  • Throttling – Some services or sites could be delivered at slower or faster speeds than others.
  • Blocking – Access to the services or sites of the ISP’s competitors could become impossible because those sites are blocked.
  • Paid prioritization – Certain websites, such as social media powerhouses, could pay to get better performance (in reliability and speed) than is granted to competitors that may not have the same capital to influence the ISP.
  • Cross-subsidization – This process occurs when a provider offers discounted or free access to additional services.
  • Redirection – Web traffic is sent from a user's intended site to a competitor’s site.

Rethinking mobile apps
Another aspect of technology that will need to be rethought in the post-repeal world is improving efficiency by developing less resource-intensive mobile apps that are delivered through more geographically distributed infrastructure. Local caching could also help, and delivery of apps that serve video and images should potentially be restructured.

You can already use file size to better balance the way you deliver video and images to mobile users. However, rendering, the amount of content streamed at once (to avoid additional round trips) and other aspects are currently optimized with net neutrality as a given.
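
Here is a hypothetical sketch of that kind of adaptation: choosing an image or video rendition based on an estimate of the user's current throughput. The thresholds and rendition names are invented for illustration.

```python
# Hypothetical sketch: pick a media rendition from a measured throughput
# estimate instead of assuming every connection is treated equally.
RENDITIONS = [            # (minimum Mbps, variant to serve)
    (10.0, "1080p"),
    (4.0,  "720p"),
    (1.5,  "480p"),
    (0.0,  "thumbnail"),
]

def pick_rendition(measured_mbps: float) -> str:
    for min_mbps, variant in RENDITIONS:
        if measured_mbps >= min_mbps:
            return variant
    return "thumbnail"

print(pick_rendition(12.3))  # -> 1080p
print(pick_rendition(2.0))   # -> 480p
```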

Providers of content delivery networks (CDNs) will need to re-strategize the methods they use to optimize enterprise traffic.

Cost has been relatively controlled in the past, according to Vargas. There is an arena of performance management and wide area network (WAN) optimization software that was created to manage speed and reliability for data centers and mobile. Those applications will no longer work correctly because they were engineered with traffic equality as a defining principle. Hence, providers will have to adapt to meet the guidelines of the new paradigm.
