Why You Need to Align Your Cloud Strategy to Business Goals

Leron Zinatullin

Your company has decided to adopt the cloud – or maybe it was among the first to rely on virtualized environments before "the cloud" was even a thing. In either case, cloud security has to be managed. How do you go about that?

Before checking out vendor marketing materials in search of the perfect technology solution, let's step back and think about it from a governance perspective. In an enterprise like yours, there are a number of business functions and departments with various levels of autonomy.

Do you trust them to manage business process-specific risk or choose to relieve them from this burden by setting security control objectives and standards centrally? Or maybe something in-between?

Centralized model

Managing security centrally allows you to project your security strategy and guiding policy uniformly across all departments. This is especially useful when aiming to achieve alignment across business functions. It helps when your customers, products or services are similar across the company, but even if not, centralized governance and clear accountability may reduce duplication of work by streamlining processes and making cost-effective use of people and technology (if organized in a central pool).

If one of the departments is struggling financially or is less profitable, the centralized approach ensures that overall risk is still managed appropriately and security is not neglected. This point is especially important when considering a security incident (such as misconfigured access permissions) that may affect the whole company.

Responding to incidents, in general, may be simplified not only from a reporting perspective but also by making sure due process is followed with appropriate oversight.

There are, of course, some drawbacks. In the effort to come up with a uniform policy, you may end up in a situation where it loses its appeal and is perceived as too high-level and out of touch with real business unit needs. Buy-in from business stakeholders, therefore, might be challenging to achieve.

Let’s explore the alternative – the decentralized model.

Decentralized model

This approach is best applied when your company’s departments have different customers, varied needs and business models. This situation naturally calls for more granular security requirements, preferably set at the business unit level.

In this scenario, every department is empowered to develop its own set of policies and controls. These policies should be aligned with the specific business need relevant to that team. This allows for local adjustments and increased levels of autonomy. For example, upstream and downstream operations of an oil company have vastly different needs due to the nature of activities in which they are involved. Drilling and extracting raw materials from the ground is not the same as operating a petrol station, which can feel more like a retail business rather than one dominated by industrial control systems.

Another example might be a company that grew through a series of mergers and acquisitions in which the acquired companies retained a level of individuality and operated as an enterprise under the umbrella of a parent corporation. With this degree of decentralization, resource allocation is no longer managed centrally and, combined with increased buy-in, allows for greater ownership of the security program.

This model naturally has limitations. These were highlighted when identifying the benefits of the centralized approach: potential duplication of effort, an inconsistent policy framework, challenges when responding to an enterprise-wide incident, etc. But is there a way to combine the best of both worlds? Let's explore what a hybrid model might look like.

Hybrid model

A middle ground can be achieved by establishing a governance body that sets goals and objectives for the company overall and allows departments to choose how to achieve these targets. What are examples of such centrally defined security outcomes? Maintaining compliance with relevant laws and regulations is an obvious one, but the point is more subtle than that.

The aim here is to make sure security is supporting the business objectives and strategy. Every department in the hybrid model, in turn, decides how its security efforts contribute to overall risk reduction and a better security posture.

This means setting a baseline of security controls, communicating this to all business units, and then gradually rolling out training, updating policies and setting risk, assurance and audit processes to match. While developing this baseline, however, input from various departments should be considered, as it is essential to ensure adoption.

When an overall control framework is developed, departments are asked to come up with a specific set of controls that meet their business requirements and take distinctive business unit characteristics into account. This should be followed by a gap assessment to identify potential inconsistencies with the baseline framework, along the lines of the sketch below.
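
To make the gap assessment concrete, here is a minimal Python sketch of how a central team might compare a business unit's implemented controls against the enterprise baseline and report what is missing. All control IDs and department names are hypothetical placeholders, not taken from any real framework.

# Minimal sketch: compare a department's implemented controls against the
# central baseline and report the gaps. Control IDs are hypothetical.
ENTERPRISE_BASELINE = {
    "IAM-01": "Enforce multifactor authentication for privileged accounts",
    "LOG-02": "Ship audit logs to the central monitoring platform",
    "ENC-03": "Encrypt data at rest with customer-managed keys",
    "NET-04": "Restrict inbound access to approved network ranges",
}

def gap_assessment(department, implemented):
    """Return baseline control IDs the department has not yet implemented."""
    gaps = sorted(set(ENTERPRISE_BASELINE) - set(implemented))
    for control_id in gaps:
        print(f"[{department}] missing {control_id}: {ENTERPRISE_BASELINE[control_id]}")
    return gaps

# Example: the upstream business unit reports what it has in place today.
gap_assessment("upstream-operations", {"IAM-01", "NET-04"})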

In the context of the cloud, decentralized and hybrid models might allow different business units to choose different cloud providers based on individual needs and cost-benefit analysis. They can go further and focus on different solution types such as SaaS over IaaS.

As mentioned above, business units are free to decide how to implement security controls, provided they align with the overall policy. Compliance monitoring responsibilities, however, are best shared. Business units can manage the implemented controls but should link in with the central function for reporting in order to agree on consistent metrics and remove potential bias. This approach is similar to the Three Lines of Defense model employed in many organizations to manage risk effectively: departments themselves own and manage risk in the first instance, with the security function and the audit and assurance function forming the second and third lines of defense, respectively.

What next?

We've looked at three different governance models and discussed their pros and cons in relation to the cloud. Depending on the organization, the choice can be fairly obvious: it might emerge naturally from the way the company runs its operations. All you need to do is fit the approach to the organizational culture and adapt your cloud governance accordingly.

The point of this blog post, however, is to encourage you to consider security in the business context. Don’t just select a governance model based on what sounds good or what you’ve done in the past. Instead, analyze the company, talk to people, see what works, and be ready to adjust the course of action.

If the governance structure chosen is wrong or, worse still, undefined, this can stifle the business instead of enabling it. And believe me, that’s the last thing you want to do.

Be prepared to listen: the decision to choose one of the above models doesn’t have to be final. It can be adjusted as part of the continuous improvement and feedback cycle. It always, however, has to be aligned with business needs.

5G and AI: A Potentially Potent Combination

Kris Seeburn

Last week's US State of the Union address by President Donald J. Trump promised legislation to invest in "the cutting edge industries of the future." Without much detail initially available, the White House filled in the blanks by suggesting "President Trump's commitment to American leadership in Artificial Intelligence, 5G wireless, quantum science and advanced manufacturing will ensure that these technologies serve to benefit the American people and that the American innovation ecosystem remains the envy of the world for generations to come."

This comes at a time when countries such as China have really taken a leap forward on these technologies, with Chinese telecommunications company Huawei making especially notable strides. On a global level, we need to understand that 5G stands at a crossroads of speed: it will change the processing capabilities for AI and will narrow the gap between processing in the cloud and processing on devices. It also is going to be a major contributor to driving centralized processing.

5G makes the debate around AI edge computing irrelevant. Imagine the speed in gigabits that 5G can deliver in terms of bandwidth, millisecond latencies and reliable connections. The network architecture easily supports AI processing and will change the AI landscape.

To provide some context, it is important to recognize how 5G and AI are intertwined. 5G is the next generation of mobile communication technology and will enhance the speed and integration of various other technologies. Its speed, quality of service, reliability and much more can transform the way we currently use the internet and its related services.

On the other end, AI is poised to allow machines and systems to function with intelligence levels similar to those of humans. With 5G supporting online simulations for analysis, reasoning, data fitting, clustering and optimization in the background, AI will become more reliable and more accessible. Imagine that once you have trained your systems to perform certain tasks, performing analysis becomes automatic and faster while costing far less.

Put simply, 5G speeds up the services that you may have on the cloud, an effect similar to being local to the service. AI gets to analyze the same data faster and can learn faster to be able to develop according to users’ needs.

5G also promises significant breakthroughs over traditional mobile communications systems. It is going to enhance the capabilities of our traditional networks; even the speeds we get over wire or fiber go much further over a 5G network, which evolves to support IoT applications in various fields, including business, manufacturing, healthcare and transportation. 5G will serve as the basic technology for future IoT deployments that connect and operate entire organizations, the aim being to support differentiated applications within a uniform technical framework.

However, with rapid development, AI is rising to these challenges. It is a promising support for the problems associated with the 5G era and will lead to revolutionary concepts and capabilities in communications. This will also raise the game in the applications world as business requirements become more demanding. As mentioned, the gap between cloud and on-device processing will continue to narrow, and the dream of a massive IoT network will become more feasible.

In reality, 5G will take some time to have a significant impact on AI processing. In the meantime, as AI applications are integrated into devices, relying on device-based AI processing rather than waiting for 5G to be deployed seems to be a safe strategy. However, one thing is for sure: the push is to have 5G and AI integration happen on the same chips in your smartphones, making those phones more intelligent as well.

The question now is: are we ready to see this happen? Well, it already is beginning to unfold in some countries around the world, with China leading the pack. The smartphone arena seems to be especially competitive, which can force earlier adoption and network upgrades. Be ready, from the security lane to the assurance lane, as we will need to re-adapt ourselves to these standards sooner rather than later.

Before You Commit to a Vendor, Consider Your Exit Strategy

Baan Alsinawi

Vendor lock-in. What is it? Vendor lock-in occurs when you adopt a product or service for your business, and then find yourself locked in, unable to easily transition to a competitor's product or service. Vendor lock-in is becoming more prevalent as we migrate from legacy IT models to the plethora of sophisticated cloud services offering rapid scalability and elasticity, while fueling creativity and minimizing costs.

However, as we rush to take advantage of what the cloud has to offer, we should plan strategically for vendor lock-in. What happens if you find another cloud provider that you prefer? How will you migrate your services? What are the costs, how disruptive will it be, and will you have the professional talent to transition successfully?

As a vendor, locking in customers by ensuring that they cannot easily transition elsewhere is smart business. However, as a buyer looking for innovative solutions and a better value for services, you require flexibility if your business needs change, or if a vendor is no longer available due to bankruptcy or restructuring.

As you adopt a growing array of cloud-based anything-as-a-service (XaaS) offerings to outsource your business support functions (from Salesforce to AWS services, Google Docs to Microsoft Office 365), consider your exit strategy if your business needs change, or your vendor is no longer available.

Take a step back and consider vendor lock-in as part of your overall risk management strategy. A single cloud provider can offer great options for redundancy, risk management and design innovation. But what happens when you consider redundancy across multiple providers? How easy is it to have a primary service on AWS and a secondary/backup on Google? Not so easy.

Best practices suggest that you shouldn't put all your eggs in one basket. However, developing a SaaS solution designed to work on two disparate cloud services is a complex undertaking. If you are simply using the cloud for storage/raw data backup, you may be able to transfer data between providers. Even then, you need to pay attention to data structures and standards across platforms. When developing complex solutions that rely on outsourced technologies such as AWS continuous integration/continuous delivery (CI/CD) tooling, Splunk Cloud for auditing, or Qualys Cloud for vulnerability scanning, how much redundancy and portability are you baking into your risk management strategy?
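
An illustration of the portability point: the Python sketch below copies raw backup objects from one object store to another, assuming both providers expose an S3-compatible API and that boto3 is installed. The endpoint URL, bucket names and credentials are placeholders rather than real services.

# Minimal sketch: copy objects between two S3-compatible object stores.
import boto3

source = boto3.client("s3")  # default AWS credentials and region from the environment
target = boto3.client(
    "s3",
    endpoint_url="https://objects.example-provider.com",  # hypothetical S3-compatible endpoint
    aws_access_key_id="TARGET_KEY_ID",
    aws_secret_access_key="TARGET_SECRET",
)

def copy_bucket(src_bucket, dst_bucket):
    """Stream every object from the source bucket into the target bucket."""
    paginator = source.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            body = source.get_object(Bucket=src_bucket, Key=obj["Key"])["Body"].read()
            target.put_object(Bucket=dst_bucket, Key=obj["Key"], Body=body)

copy_bucket("legacy-backups", "portable-backups")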

Also, what happens if AWS is no longer available? This seems highly unlikely today, with Amazon's stock hovering at around US $2,000 a share. But what if your new CIO decides Azure offers better widgets? Or your CISO wants a primary platform on AWS and a backup on Oracle? There are vast differences in these platforms, and one development effort will not be easily portable to the other.

For example, TalaTek is developing its own next-generation cloud-based solution for its current platform. We must consider the additional time, multiple developers and increased complexity required to operate on two different cloud platforms to manage this risk. The question we ask is: can we afford not to plan an exit strategy should our strategic business goals change?

Acknowledging the risk, and in some cases accepting it, is a key aspect of risk management. TalaTek has accepted the risk in adopting a single cloud platform, since it makes business sense to do so.

What should you consider when adopting cloud-based services? Here are our top five considerations:

  • Have a resilient risk management strategy that requires you to continuously re-evaluate your risk assumptions and diligently monitor market changes.
  • Negotiate strong service-level agreements, vetted by legal experts, in the design of your cloud strategy.
  • Align your business and IT/cloud strategies to protect your investments and ensure continuity of operations.
  • Where possible, use open source stacks and standard API structure to provide portability and interoperability.
  • Consider whether your risk tolerance allows you to accept some risk. If you are offering a SaaS solution to manage your client's CRM, your risk tolerance is different from that of a hospital using the cloud to manage all of its client health data.

The cloud is here to stay. Assess your options, be smart about your strategy, and consider your exit options as you embark on the exciting journey into the cloud.

FedRAMP: Friend or Foe for Cloud Security?

Baan Alsinawi

Cloud security is on everyone's minds these days. You can't go a day without reading about an organization either planning its move to the cloud or actively deploying a cloud-based architecture. A great example is the latest news about the US Department of Defense and its ongoing move to the cloud.

The US government is leading the charge by encouraging the private sector to provide secure cloud service offerings that enable federal agencies to implement the cloud-first policy (established by the Office of Management and Budget in 2010) using FedRAMP. FedRAMP is a US government-wide approach to security assessment, authorization and continuous monitoring for cloud products and services. It sets a high bar for compliance with standards that ensure effective risk management of cloud systems used by the federal government.

There is even some chatter now about efforts to establish FedRAMP in law, in an effort to encourage agencies to adopt the cloud at a more rapid pace. The delay in adoption is in no small measure related to the complexity and intensive resource requirements of the current FedRAMP processes, and to the difficulty of finding providers that are FedRAMP-certified.

One of the main obstacles to wider adoption of FedRAMP is the difficulty for industry, Third Party Assessment Organizations (3PAOs) and Cloud Service Providers (CSPs) in determining the profitability model for engaging in the FedRAMP program.

Establishing such metrics can offer key drivers for industry adoption, perhaps by allowing CSPs to determine how offering FedRAMP-accredited IaaS/SaaS/PaaS can be truly beneficial and profitable for the company's bottom line, while at the same time allowing agencies to determine the cost effectiveness of a move to the cloud.

While achieving FedRAMP accreditation has many challenges (as TalaTek learned over the past 18 months during deployment of its own cloud-based solution), there are clear benefits for federal agencies and industry in working with FedRAMP-authorized service providers. At a high level, these include established trust in the effectiveness of implemented controls and improved data protection measures.

Despite the many challenges to adoption, I am a big believer that the benefits of the FedRAMP program outweigh the challenges, especially in the long run, after the kinks are ironed out and program maturity improves through increased adoption by both government and private industry.

The FedRAMP program provides significant value by increasing protection of data in a consistent and efficient manner – a key need among government organizations and especially among information sharing agencies – by providing these key benefits: 

  • Enables a more successful move to the cloud for federal agencies;
  • Ensures a minimum security baseline for all cloud services; 
  • Provides managed security continuity for a cloud offering versus a one-time compliance activity;
  • Standardizes requirements for all cloud service providers; and
  • Creates a 3PAO cadre that is capable, certified and can ensure quality assurance for cloud implementations.

By providing a unified, government-wide framework for managing risk, FedRAMP overcomes the downside of duplication of effort and inefficiency associated with existing federal assessment and authorization processes.

When considering a move to the cloud and the level of security that is necessary, we should all take risk management seriously and invest in skill development and knowledge, as well as in adapting our processes for the 21st century and getting ready for the coming dominance of the cloud. FedRAMP provides a roadmap for any organization to achieve these goals.

Key Takeaways from a Recent Cloud Training

Adam Kohnke

I recently began taking my first crack at auditing an Amazon cloud platform that comprises over a dozen managed services. While I was excited to add this new wrinkle to my skill set, I had no idea where to get started on identifying key risks applicable to each service or how to approach the engagement. Searching online eventually led me to the AWS training and certification website. My intuition initially suggested to me that Amazon was not very likely to help me audit their services, or even if they did, there probably would not be much free information available that I could leverage to obtain sufficient understanding of the service architecture or operation. Well, I was dead wrong!

This blog post covers my experience with the free Cloud Practitioner Essentials course offered by Amazon and some of the key takeaways I obtained from various sections of this training.

Topics covered and use for auditors
The course takes about eight hours to complete, divided among five major sections (Cloud Concepts, Core Services, Architecture, Security, and Pricing and Support). If you can forgive Amazon's "pointing two thumbs at itself" advertising tone, you can start picking up key risks and audit focus areas as you march on. My key takeaway from the Core Services overview portion of the course was about Trusted Advisor, a management tool auditors can use to obtain a very quick glance at how well their organization is utilizing the collection of services. Trusted Advisor highlights whether cost optimization of services is being achieved while providing recommendations on how to improve service usage. It also reports on whether performance opportunities exist, whether major security flaws are present (such as not utilizing multifactor authentication on accounts) and the degree of fault tolerance that exists across your compute instances. Trusted Advisor is not the be-all, end-all, as it uses a limited set of checks, but it provides a quick health check against the environment (see the sketch below). A list of Identity and Access Management (IAM) best practices was also provided, along with details on how to validate them.
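
For auditors who want to pull the same information programmatically, the boto3 sketch below queries the AWS Support API that backs Trusted Advisor and prints the status of each check. Note the assumptions: the account needs a Business or Enterprise support plan, and the Support API is served from the us-east-1 region.

# Minimal sketch: list Trusted Advisor checks and their current status.
import boto3

support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    # Status is "ok", "warning", "error" or "not_available" for each check.
    print(f"{check['category']:<25} {check['name']:<60} {result['status']}")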

Security, shared responsibility and suggested best practices
Amazon does a good job of pounding the shared responsibility security concept into your head for managing its services and touches on this concept throughout the training. Amazon is responsible for security of the cloud (physical security, hardware/firmware patching, DR, etc.) and the customer is responsible for security in the cloud (controlling access to data and services, firewall rules, etc.). My key takeaway from this portion of the training centered on some of the best practices for managing your root user account, which is issued to you upon establishing web services with Amazon. The root account is the superuser account that has complete access and control over every managed service in your portfolio. Because you cannot restrict the abilities of the root account, Amazon recommends deleting the access keys associated with the root account, establishing an IAM administrative user account, granting that account the necessary admin permissions and using that account for managing services and security.
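
A minimal boto3 sketch of the second half of that recommendation, creating an IAM administrative user, might look like the following. The user name and password are placeholders; many teams do this from the console instead, and the root account's own access keys are deleted separately from the root security credentials page.

# Minimal sketch: create an IAM administrative user so day-to-day management
# no longer relies on the root account. Name and password are placeholders.
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="cloud-admin")
iam.attach_user_policy(
    UserName="cloud-admin",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",  # AWS-managed admin policy
)
iam.create_login_profile(
    UserName="cloud-admin",
    Password="Replace-with-a-strong-generated-password-1!",
    PasswordResetRequired=True,  # force a password change at first sign-in
)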

Detailed and helpful audit resources
The pricing and support section of the training provided very useful metrics that financial auditors and purchasing personnel should find helpful in determining the cost and efficiency of managed services. This portion of the course covers the fundamental cost drivers (compute, storage and data transfer out) and very specific cost considerations for each, such as clock time per instance, level of monitoring, etc.
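
A back-of-the-envelope way to think about those three drivers is sketched below in Python. All rates are hypothetical placeholders, since real pricing varies by service, region and instance type.

# Minimal sketch of the three fundamental cost drivers: compute hours,
# storage and data transfer out. Rates below are hypothetical placeholders.
HOURLY_COMPUTE_RATE = 0.10   # USD per instance-hour (placeholder)
STORAGE_RATE_PER_GB = 0.023  # USD per GB-month (placeholder)
TRANSFER_OUT_PER_GB = 0.09   # USD per GB transferred out (placeholder)

def monthly_estimate(instance_hours, stored_gb, transfer_out_gb):
    """Rough monthly cost from the three fundamental drivers."""
    return (
        instance_hours * HOURLY_COMPUTE_RATE
        + stored_gb * STORAGE_RATE_PER_GB
        + transfer_out_gb * TRANSFER_OUT_PER_GB
    )

# Example: two instances running all month, 500 GB stored, 200 GB served out.
print(f"Estimated monthly cost: ${monthly_estimate(2 * 730, 500, 200):,.2f}")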

My key takeaway from this portion of the course was the overview of the support options – more specifically, the audit white papers that are maintained and made available at no cost. There are very specific security audit guides, best practices for security, operational checklists and other information that will allow an auditor to build an accurate and useful engagement program.

This course led me to the Security Fundamentals course, which provides enhanced audit focus on several topics of interest to IT auditors, including access management, logging, encryption and network security.

In conclusion, I am surprised and extremely impressed by the amount and depth of free resources Amazon makes available to the auditing community regarding its services. I wish other providers would take similar measures to assist the information security community with understanding what is important and how to gain assurance over their services' secure operation.

Editor’s note: View ISACA’s audit/assurance programs here.

The Impact of Net Neutrality on Cloud Computing

Marty Puranik

The US Federal Communications Commission (FCC) recently repealed the net neutrality guidelines that it implemented less than three years ago. There has been much discussion, speculation and concern about how that move will impact the Internet, small business and consumers. Many people have suggested that one effect of the repeal will be that video streaming and other cloud-hosted, web-delivered media will start to cost much more for the consumer.

When the FCC enacted the 2015 regulations, it became unlawful for broadband providers to slow down or block certain web traffic. The 2015 rules did not, in fact, cover enterprise web access, which is often custom-negotiated. They did, however, safeguard the flow of data to small businesses.

FCC chair Ajit Pai and other lawmakers take the position that the policies and practices of net neutrality are unnecessary rules which make it less likely that people will invest in broadband networks, placing an unfair strain on internet service providers (ISPs). That perspective does not seem to be aligned with that of the public, according to a poll from the University of Maryland showing that 83% of voters favored keeping the net neutrality rules in place.

What is net neutrality, exactly?
The basic notion behind the concept of net neutrality, according to a report by Don Sheppard in IT World Canada, is that the government should ensure that all bits of data and all information providers are treated in the same way.

Net neutrality makes it illegal to offer paid traffic prioritization, throttle, block, or engage in similar practices (see below).

Sheppard noted that there are two basic technical principles related to internet standards that are part of the basis for net neutrality:

  • End-to-end principle – Application-specific functions should take place at the endpoints of a general-purpose network rather than at intermediate nodes.
  • Best-efforts delivery – Performance is not guaranteed; instead, the network makes its best effort to deliver packets equally and without discrimination.

IoT: harder for startups to compete?
Growth of the Internet of Things (IoT) is closely connected to the expansion of cloud computing, since the former uses the latter as its backend. In terms of impact on the IoT, Nick Brown noted in IoT For All that the repeal of net neutrality will result in an uneven playing field in which it will become more difficult for smaller organizations to compete, while larger firms will be able to form tighter relationships with ISPs.

The issue of greater latency is key to the removal of net neutrality, because latency could increase as sites are throttled (slowed down). Throttling could occur between one device and another because ISPs may want some devices (perhaps ones they build themselves) to have better performance than others.

Some people think the impact of the net neutrality repeal on the IoT will be relatively minor. However, many thought leaders think there will be a significant effect since IoT devices rely so heavily on real-time analysis.

Entering a pay-to-play era
Throttling, or slowing throughput, could occur with video streaming services and other sites. Individual cloud services could be throttled. Enterprises could have difficulty with apps that they host in their own data centers, too, since those apps also require a fast internet connection to function well.

There will be a pay-to-play scenario for web traffic instead of just using bandwidth to set prices, according to Forrester infrastructure analyst Sophia Vargas.

There is competition between wired and wireless services that has resulted from changes to their pricing models following the repeal of net neutrality, said Vargas. The pricing is per bandwidth for wired, landline services, while it is per data for wireless services. Wireless services will have the most difficulty because wired services are controlled by a smaller number of ISPs.

There will be more negotiations and volatility in the wireless than in the wired market, noted Vargas. Competition is occurring “in the ability for enterprises to essentially own or get more dedicated [wired] circuits for themselves to guarantee the quality of service on the backend,” she added.

Does net neutrality really matter?
The extent to which people are committing themselves to one side or the other gives a sense of how critical net neutrality is from a political, commercial and technical perspective. Consumers should be aware of the potential for companies to mistreat them without these protections in place (which is not to say those abuses will occur).

Ways that ISPs could act against the precepts of net neutrality include:

  • Throttling – Some services or sites could be treated with slower or faster speeds.
  • Blocking – Getting to the services or sites of competitors to the ISP could become impossible because those sites are blocked.
  • Paid prioritization – Certain websites, such as social media powerhouses, could pay to get better performance (in reliability and speed) than is granted to competitors that may not have the same capital to influence the ISP.
  • Cross-subsidization – This process occurs when a provider offers discounted or free access to additional services.
  • Redirection – Web traffic is sent from a user's intended site to a competitor’s site.

Rethinking mobile apps
Another aspect of technology that will need to be rethought in the post-repeal world is improving efficiency by developing less resource-intensive mobile apps that are delivered through more geographically distributed infrastructure. Local caching could also help, and delivery of apps that serve video and images should potentially be restructured.

You can already look at file size to create better balance in the way you deliver video and images to mobile users. However, the rendering, the quantity that is stream-loaded (to avoid additional pings) and other aspects were optimized with net neutrality as a given.

Providers of content delivery networks (CDNs) will need to re-strategize the methods they use to optimize enterprise traffic.

Cost has been relatively controlled in the past, according to Vargas. There is an arena of performance management and wide area network (WAN) optimization software that was created to manage speed and reliability for data centers and mobile. Those applications will no longer work correctly because they were engineered with traffic equality as a defining principle. Hence, providers will have to adapt to meet the guidelines of the new paradigm.

The Importance of Securing Your Cloud

Marty Puranik

One of the biggest misconceptions regarding the cloud is that you can rely on the cloud service provider to protect your business, your data and everything else your firm holds dear.

Take a minute to think about your own home security system. Do you just lock the doors with the key and head off to work, fully secure that your valuables will still be there when you get back? Not likely. Many of us have at least a simple alarm system in place on doors and windows. More and more people are heading toward the latest trends in home security: motion sensors, 24-hour video cameras, remote door answering, etc.

Why does securing your cloud matter? Three enormous reasons:

  • Your cloud provider is only managing part of your security.
  • Cloud security lowers the risk of data breaches.
  • The minimum level of security compliance should never be enough.

Your security vs. cloud security
Let's talk about your security versus the cloud service provider's security. The provider has specific language in any contract it signs with you concerning what it is and isn't responsible for if there is a security breach. In its 2016 "Cloud Adoption & Risk Report," Skyhigh Networks reported that the average user in an organization employed 36 different cloud services at work. That's 36 potential security breach points into your cloud and 36 ways for information to leak out. By introducing all of the apps you need to run your business into your cloud environment, you take on the responsibility of ensuring that they serve only their necessary capacity when analyzing and manipulating the data stored in your cloud.

It is essential that you manage all of your cloud-based applications and treat them all as security risks until the day you can scratch them off that list. The old days of hiring a third-party app to plug and play into your network are long gone. Your best way forward should be with a Security-as-a-Service (SECaaS) solution. Just like your infrastructure, software and your share of the cloud itself, SECaaS is a scalable solution that can handle your growth but also scale down in the event your business shrinks. Even an in-person, onsite IT expert is not available 24 hours a day, 7 days a week, but a SECaaS provider is. The service can deploy solutions instantaneously when problems or suspicious activities arise, unlike in a traditional setting where everyone is waiting around for the IT professional to respond to a call for help.

The high price of data breaches
As for breaches, a 2016 study showed that the estimated cost of a data breach for a company is US $4 million. If your company has an extra $4 million lying around, by all means don't fret about your cloud security. That figure might seem high at first glance, but there's far more at work here than merely a loss of data or intellectual property. When you suffer a public data breach, word travels fast. Your best employees will be more receptive to offers from competitors. Your recruitment will suffer, as those entering the workforce and those seeking to switch employers will take a much harder look at what sort of company gets breached and what kind of company they're looking to work for. And last but not least is the impact your data breach will have on your company's public perception. The public has an incredibly long memory when it comes to embarrassing incidents for public companies. Don't believe it? Fast-food giant Jack in the Box had a scare with mislabeled meat in 1981, and 37 years later, it's still one of the top Google results for the restaurant chain.

Nobody wants the minimum
You didn’t get into business to do the bare minimum when it comes to protecting your assets and your customers’ information. No salesman has ever told a customer that he’d do the absolute least amount of work he could to get the customer’s business. The same excellence you strive for in taking command of your market and maximizing your profits should be applied to keeping your cloud secure.

To ensure the security of your cloud, consider adding dimensions such as multifactor authentication, where even if an employee's login name and password are stolen or compromised, the party that took them still cannot access your cloud without passing an additional layer of security. Simple steps like this can be the difference between a secure cloud system and one just waiting to be picked apart by hackers.
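
On AWS, for example, one documented way to back this up is an IAM policy that denies requests made without MFA, using the aws:MultiFactorAuthPresent condition key. The Python sketch below creates such a policy with boto3; the policy name is a placeholder, and the policy would still need to be attached to the relevant users or groups.

# Minimal sketch: create an IAM policy that denies almost everything unless
# the request was made with MFA. Policy name is a placeholder.
import json
import boto3

deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWhenNoMFA",
            "Effect": "Deny",
            "NotAction": ["iam:ChangePassword", "iam:GetUser", "sts:GetSessionToken"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="require-mfa-for-everything",
    PolicyDocument=json.dumps(deny_without_mfa),
)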

What Role Will IoT Play in Edge Computing?

Adam Kohnke

While no one doubts the power that cloud computing has on our present and future digital needs, it still has basic flaws that are cause for alarm: notably concerns over privacy of data and its ability to handle large-scale, constant computations.

The Internet of Things (IoT) continues growing at an exponential rate. Its market is estimated to reach $457 billion by 2020, a jump of 28.5% from 2016. But concerns still loom about its shortcomings when synced up to the cloud, a problem that has tied edge computing to the IoT.

If your business depends on IoT devices being able to parse data seamlessly in real-time to provide instant analysis for your processes and people, then edge computing is not some hazy future vision, but your solution for today’s problems.

Edge computing does what the cloud can’t for IoT: it reduces latency and gives the opportunity for faster processing for IoT devices that are attempting to operate in real time. Things like the new prototype self-driving cars or sensors in hospital rooms tasked with making decisions as vital signs ebb and flow risk catastrophic events if they are unable to process data instantly without delay.

Here are the three biggest reasons your company should be employing edge computing for IoT right now.

Reduced latency. Sometime this year, IoT devices will surpass cellphones in terms of number of connected devices. That's pretty staggering for a technology most people had no idea existed five years ago. Having the edge computing device located far closer to the IoT object can cut latency dramatically. Edge computing will also determine each device's processing needs and adjust accordingly; the entirety of a company's cloud space does not have to come online for the computations of one small IoT unit.

Upgraded network connections. Edge computing guarantees that cloud outages won’t affect individual devices by limiting interactions with the cloud. Only essential functions will be run through the cloud, which will in turn ease the burden on that environment to perform its own functions. Apps that exist solely in the cloud will have more processing power without having to compete for bandwidth against IoT machines that can successfully exist on the network’s edge.

More cohesive privacy. With new laws like the EU's General Data Protection Regulation (GDPR) coming online in the near future, data privacy has never been a bigger topic in the digital realm. In its earlier forms, security was a low-grade concern for IoT devices like digital thermostats. But as a legion of microphones, cameras and other personal input devices join the IoT, the threat of data loss or theft becomes more real and more harmful. Edge computing can take considerable strain off securing data by performing a number of the required computational steps on the device itself. The device can then send the data along to the cloud after it has been transformed and encrypted, improving both the speed at which it is processed in the cloud and lowering the risk of theft.
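
A minimal Python sketch of that idea: an edge device reduces raw sensor readings to a small summary and encrypts it locally before anything leaves the device. The third-party cryptography package and the send_to_cloud call are assumptions standing in for whatever the actual device platform provides.

# Minimal sketch: summarize sensor readings on the device, encrypt the summary,
# and upload only the ciphertext instead of the raw stream.
import json
import statistics
from cryptography.fernet import Fernet  # third-party package, assumed available

key = Fernet.generate_key()  # in practice, provisioned securely to the device
cipher = Fernet(key)

def summarize_and_encrypt(readings):
    """Reduce raw readings to a small summary, then encrypt it on-device."""
    summary = {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }
    return cipher.encrypt(json.dumps(summary).encode("utf-8"))

payload = summarize_and_encrypt([72.1, 72.4, 71.9, 73.0, 72.6])
# send_to_cloud(payload)  # hypothetical ingestion call; only ciphertext leaves the device
print(len(payload), "encrypted bytes ready to upload")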

Author’s note: Established in 1994, Atlantic.Net is a hosting solutions provider, with data centers in New York City, San Francisco, Dallas, Toronto, London and Orlando. Marty Puranik is the founder, president and CEO.

Conducting Cloud ROI Analysis May No Longer Be Necessary

Chris Richter

ISACA's newly released report, How Enterprises Are Calculating Cloud ROI, is a landmark piece of research that, in my opinion, validates the notion that we have reached (or are at least rapidly approaching) the tipping point where organizations realize that moving their IT infrastructures to the cloud is an inevitable, foregone conclusion. The white paper documents the growing trend for organizations to forgo financial ROI analyses as a way to justify investment in cloud computing, pointing instead to intangible returns, such as better application performance, enhanced business agility and improved customer and employee experience (Cx/Ex).

But could there also be a more subjective reason fewer companies are performing ROI analyses? There definitely is a growing perception that forward-thinking companies are moving away from internally managed data centers and infrastructure. In many of my interactions with IT professionals from major, Fortune 500 enterprises, I have heard comments like, “To stay competitive, we’re moving all applications to the cloud; anything we can’t move will die in place … ” and, “Our board of directors is asking why we are not taking more advantage of the cloud.”

The age of digital transformation is demanding our attention and driving us to a foregone conclusion: we cannot continue to do it all ourselves. As IT professionals, we are asked to reduce costs, increase productivity, stay a few steps ahead of cyber criminals, support the needs of growing mobile workforces, accommodate hybrid networks and improve Cx/Ex, all the while improving network and application performance. Enterprises, and even governments, are also under pressure to appear competitive and on the leading edge by outwardly embracing digital transformation. All of this cannot be accomplished by building on top of traditional IT infrastructure and management models.

The digital age is moving at such a fast pace that there may be a general sense that there isn't time for formal ROI rigor, or that the concept of due care is taking hold, such that it is now considered irregular and imprudent not to migrate IT functions to the cloud. There was a time when office buildings had their own telephone switchboard operators, and textile mills and factories generated their own electricity. When the automated PBX and centrally distributed AC power emerged, it's doubtful many organizations did an ROI analysis after it became obvious the old way was the wrong way.

ISACA’s new guidance on cloud ROI reinforces many of these notions. The habit of conducting formal cloud ROI analyses may be coming to an end as traditional IT gives way to the cloud.

5 Helpful Tips for Better IT Change Management

Anna Johannson

As you know, change management is critical to the long-term success of every organization. This is especially true when it comes to IT, where change happens at an astonishing pace. But is your organization where it needs to be?

Guidance for Your Change Management Strategy
There is something equally exhilarating and frightening about change. It is a necessary factor in moving forward and growing as a business, but it’s also unfamiliar and intimidating. Unless you have a strategy in place for managing change – particularly on the IT front – it’s quite likely that you’ll focus more on fear and anxiety than hope and excitement.

That being said, here are some tips to help you approach IT change management from a strategic perspective.

Plan ahead
Change management is all about planning ahead and being proactive. Once an issue already has occurred, or your organization finds itself in the midst of a major shift, it is too late. Doing damage control or trying to put out fires will take valuable energy away from other important tasks. Start early and always anticipate what will happen next.

Choose the right software
You don't have to take on change management all by yourself. Automating some of the process with the right tools can make all the difference in the world. For example, change management software, such as a help desk platform, can simplify the process by providing highly customizable solutions and automated workflows to manage change requests and approvals.

It’s also helpful to have some sort of communication tool integrated within your change management software that allows you to reach all key stakeholders whenever and wherever they are. The sooner people are involved in the process, the faster you can get things moving on the right track.

Choosing the right tools becomes even more important the larger the organization is. This is something the California State University (CSU) system has discovered firsthand when it comes to making key changes to its IT system.

Any IT system change that occurs on the main campus has to also go through each of the 23 satellite campuses and the thousands of employees, faculty and students at these locations. So, whereas a small change might not have a major impact at the main campus, it can have drastic effects when compounded over two dozen campuses. In order to simplify the process and make it easier to manage, CSU uses an automated change management system from Cisco that allows upgrades to occur automatically across the entire system. The results at CSU have been far better than if change management were handled manually.

Focus on the outcome
While change management is all about taking the necessary steps to move from Point A to Point B in the most seamless and efficient way possible, the focus always has to be on the outcome. When it’s all said and done, change management exists to ensure your IT department is set up for future success.

Prioritize engagement
One of the biggest mistakes organizations make is assuming that change management is all about deploying the right technologies and setting up the appropriate processes. While these are certainly important components of change management, it’s actually your employees who have to execute.

“If these individuals are unsuccessful in their personal transitions, if they don’t embrace and learn a new way of working, the initiative will fail,” explains Prosci, a leading change management firm. “If employees embrace and adopt changes required by the initiative, it will deliver the expected results.”

You have to learn to prioritize engagement of all key stakeholders; otherwise, you’ll find it challenging to make any progress. Start preparing them early and often.

Be flexible
At the end of the day, change never happens as anticipated. You may have a perfect plan for what you think will happen – and even have complete buy-in from all individuals and departments involved – but something will inevitably go awry. A willingness to adjust will serve you well.

Overlook IT change management at your own peril
The word “change” probably evokes a range of emotions. Your mind may jump to past experiences of change that were negative or unwelcome. Or, perhaps you have had good experiences with change and get excited at the thought of doing something new. But regardless of your personal history with change, you need to prepare your enterprise for the future by developing a specific change management strategy. Embracing technology-driven change management is vital if you want to be successful in the modern business world.
