When it Comes to Cyber Risk, Execute or Be Executed!

Gregory Touhill
Nestled in William Craig’s book Enemy at the Gates, which recounts World War II’s epic Battle of Stalingrad, is the story about a Soviet division that was plagued by failure in the face of the enemy. Desertions were rising, officers’ orders were not being followed, and the invading enemy was making gains. Faced with this calamitous condition, the regimental commander called the troops into formation and let them know that collectively, they were failing and would be held responsible. Then, in an outrageously cold manner, he walked through the ranks and summarily executed every 10th soldier until six soldiers lay dead on the field. He got their attention, and the unit was instrumental in the subsequent Soviet counterattack that led to victory against the Nazi invaders.

Obviously, I do not support such extreme and violent methods of accountability, yet the example does make you pay attention. As we grapple with today’s digital “enemy at the gates” – or even the “enemy inside the gates” – accountability for failing to properly protect the information on which our national prosperity and security depend has never been more important. Firing CEOs and CIOs is typically a public gesture enacted to deflect blame rather than address the root causes. Sadly, accountability and ownership often are missing components in cyber strategies and risk management planning at a time when risks are ever-increasing. It is therefore critically important that all organizations better manage cyber risk by embracing a culture of accountability and ownership that guides the implementation of due care and due diligence measures.

I define due care as “doing the right things” and due diligence as “doing the right things right.” Unfortunately, I’ve found too many organizations where due care and due diligence are not occurring. For example, ask most cyber incident responders about the root cause of cyber incidents and they likely will sigh and point to the “usual suspects” – failure to patch, misconfigured systems, failure to follow established policies, misuse of systems, lack of training, etc. As someone who led incident responders in both military and civilian government organizations, I found one of the great frustrations of cyber professionals is when they see leadership ignoring or tolerating the so-called “usual suspects” and not holding people accountable for a glaring lack of due care and due diligence.

While many media reports these days focus on the very real and present threat of well-funded nation-state actors, I contend that the greatest cyber threat we all face is what I refer to as the “Careless, Negligent and Indifferent” in our own ranks. Failing to properly configure a system, so that it exposes information to unauthorized personnel, is an example of carelessness. Failing to patch critical vulnerabilities quickly, or to implement compensating controls until a patch is ready for promotion, could be considered negligence. Personnel who are indifferent to established policies, such as prohibitions on password-sharing, expose their organizations to increased cyber risk. While nation-state actors get all the hype, I contend that more than 95% of all cyber incidents are preventable and are the result of the Careless, Negligent and Indifferent in our own ranks. We should not accept this!

Do we need more legislation, regulation or policies to thwart the threat posed by the Careless, Negligent and Indifferent? Do we need to continue our habit of buying the next neat technology in hopes that its “silver bullet” defense will save the day? I don’t think so. I believe what is needed is to execute our existing policies better and hold those who do not follow those policies accountable. While we can’t eliminate our cyber risks, we certainly can reduce our risk exposure by executing our plans, policies and procedures with greater velocity and precision. When we do so, we are exercising due care and due diligence that protects our brands, reputations, customer data, intellectual property, corporate value, etc.

Accountability must be clearly defined, especially in strategies, plans and procedures. Leaders at all levels need to maintain vigilance and hold themselves and their charges accountable for executing established best practices and other due care and due diligence mechanisms. Organizations should include independent third-party auditing and pen-testing to better understand their risk exposure and compliance posture. Top organizations don’t use auditing and pen-testing as punitive measures, but rather to find weaknesses that should be addressed. Often, they find that personnel need more training, along with regular cyber drills and exercises, to reach a level of proficiency commensurate with their goals. The organizations that fail are those that do not actively seek out weaknesses or do not address known weaknesses properly.

Sound execution of cyber best practices buys down your overall risk. With today’s national prosperity and national security reliant on information technology, the stakes have never been higher.

IoT Security in Healthcare is Imperative in Life and Death 

May Wang
We go into the hospital with a great deal of trust. We trust that doctors will help us and potentially even save our lives. Beyond hospitals, there are not many places in the world where we are willing to do anything we are asked: take off our clothes, talk about our sex lives, etc.

Recent cyberattacks, such as WannaCry and NotPetya, put this trust into question. An increasing number of cybersecurity incidents have hit hospitals and made them unsafe. Not only was patient information stolen and privacy compromised, but, in some cases, the cyberattacks interrupted normal operations and services. In hospitals, that could mean life or death.

Over the last decade, the healthcare industry made significant progress on digital transformation. Patients’ healthcare records are online, test results and images are digitized, an increasing number of medical devices are connected, and medical equipment can be remotely monitored and maintained. This technology has brought tremendous improvements in efficiency and convenience to medical staff and patients alike, while helping reduce human error and lower operational costs. At the same time, however, this high level of connectivity has created a much larger attack surface. Because there are so many connected devices, of so many different types, it is becoming increasingly difficult to keep all of them secure at all times.

Hackers can not only use these devices as stepping stones to access critical assets, such as patients’ healthcare records, they also can compromise these devices to cause physical harm and put people’s lives at risk. For example, we demonstrated in our research lab that we can hack into an infusion pump from a leading vendor to change the dosage of the medication that is going directly into a patient’s body. This dosage change alone could be fatal to a patient.

Mid- to large-size hospitals use hundreds, if not thousands, of third-party products and services. Even if the hospital itself is secured, these third-party vendors can introduce many vulnerabilities. Each of those third parties, in turn, relies on many more external vendors. If any of those external vendors is affected, there could be a domino effect on the hospital’s security – yet another reason it is extremely challenging to secure a hospital and all its IoT devices.

Is there a solution? In many ways, an IoT system is very similar to the human body – a large and complex system that is always on. Let’s use a heart attack as an analogy. We all know that a heart attack can be catastrophic. Although a heart attack usually happens suddenly, the conditions that make it likely actually take days, months or even years to build up. If we could continuously, automatically and intelligently monitor the heart and body, we could detect early signs of problems and take preventive actions to avoid the heart attack.

Doctors detect and cure diseases through their detailed knowledge of the different parts of our body and their functions. Surprisingly, we don’t have similar information on IoT networks. Most hospitals we have talked to don’t have up-to-date information about what types of IoT devices they have, much less how many of these devices are connected to their networks. So, IoT device visibility is the first task for each organization. At any given time, we need to know which devices are connected to the network – plus, what they are supposed to do and not supposed to do – and conduct real-time monitoring of their behavior for early detection of potential cyberattacks.
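As a minimal sketch of what that visibility task can look like in practice – with every device identifier, inventory entry and helper name below invented for illustration – the following compares devices observed on the network against a known inventory and flags anything unexpected:

```python
# Toy IoT visibility check: compare observed devices against a known
# inventory and flag anything unexpected. All data here is hypothetical.

KNOWN_DEVICES = {
    "00:1a:2b:3c:4d:5e": {"type": "infusion_pump", "allowed_ports": {443}},
    "00:1a:2b:3c:4d:5f": {"type": "mri_console", "allowed_ports": {443, 104}},
}

def review_observation(mac: str, dst_port: int) -> str:
    """Classify a single observed connection from a device."""
    device = KNOWN_DEVICES.get(mac)
    if device is None:
        return f"ALERT: unknown device {mac} on the network"
    if dst_port not in device["allowed_ports"]:
        return f"ALERT: {device['type']} {mac} contacted unexpected port {dst_port}"
    return "ok"

print(review_observation("00:1a:2b:3c:4d:5e", 443))  # ok
print(review_observation("00:1a:2b:3c:4d:5e", 23))   # unexpected port
print(review_observation("ff:ff:ff:ff:ff:ff", 443))  # unknown device
```

Real deployments would, of course, build the inventory and behavioral baselines automatically rather than by hand – which is exactly where the AI discussed below comes in.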

Yet another challenge beyond the number and varied types of devices: these devices get on and off the network dynamically. How do we handle a highly dynamic system of such large scale? Obviously, manual monitoring is not feasible. The key is to leverage artificial intelligence (AI) to identify and monitor devices automatically, so that we can further protect them – and the hospital and its patients – in the event of a cyberattack.

In summary, visibility and AI are the keys for IoT security in healthcare.

World Economic Forum Report Reinforces Rising Prominence of Cybersecurity

R.V. Raghu
The recent Global Risks Report by the World Economic Forum offers the latest evidence that cybersecurity is rising among the top global risks. Cyberattacks are now the global risk of highest concern to business leaders in advanced economies. This reflects the inability of enterprises to keep pace with today’s challenging threat landscape, and points to an urgent need for increased prioritization of and investment in cybersecurity by executive leadership.

While a cyberattack does not qualify as a natural disaster – one of the other top risks identified in the Global Risks Report – large-scale cyberattacks are capable of devastating critical infrastructure in similar fashion. A cyberattack has the potential to disrupt many of the most essential aspects of our lives, from electric, gas and water utilities to banking and cellphone coverage.

It is evident that the status quo will not be sufficient if we are to expect a reasonable level of security in both our personal and professional lives. Society and enterprises will need to focus on resilience, both technological and human. While contending with threats may be inevitable, our ability to recover cannot be undermined. We will need to build real and virtual firebreaks to ensure critical infrastructure elements do not fall due to the domino effect of a potential collapse.

Systemic challenges and threats require systemic solutions. Enterprises must focus not just on providing the next big app or solution to customers, but also on educating customers about potential threats and actions that can be taken to prevent or address them. In this context, it was encouraging to see the World Economic Forum announce plans for a new Global Centre for Cybersecurity. Deeper collaboration between the public and private sectors – while also tapping into the knowledge base of global industry associations such as ISACA – must be part of any substantive solutions going forward.

The increasing cybersecurity challenges that accompany the expanding threat landscape also call for the constant skilling and re-skilling of the technology workforce. Enterprises must be more committed to investing in real-world training for their security teams that takes into account the most up-to-date threats and vulnerabilities. Why is it so necessary to develop a more robust, highly skilled cybersecurity and tech governance workforce? Consider several realistic possibilities that I suspect we could encounter as 2018 progresses:

  • At least half the global population could become victims of privacy breaches;
  • The Internet of Things could become the Internet of Threats, with smart appliances used to take privacy attacks to the next level – your television, your refrigerator and your connected toothbrush knowing more about you than any other human could;
  • The rise of superintelligent threats, driven by AI and machine learning;
  • The potential for swarm attacks by drones;
  • The first bioengineered hack of the human body.

These, and other technology-driven stress points, are unprecedented challenges that demand proactive defense strategies. Disruptive technologies have the potential to power our global economy in many promising and innovative ways, but we must nurture new and more collaborative solutions to ensure these technologies are implemented effectively and securely.

While cybersecurity rising on the list of top global threats cannot be construed as good news, at least the global community has begun to recognize the scope of the challenge. Now, it is time to pull together as a global community and meet this challenge head-on.

Meltdown/Spectre: Not Patching is Not an Option

Alex Holden
The most prominent data security events of 2017, such as WannaCry and Equifax, were direct results of poor patching practices. Now, 2018 is off to a menacing start with the disclosure of two hardware vulnerabilities affecting most modern microprocessors and requiring a number of patches at several levels of defense.

To clarify, Meltdown is a vulnerability that allows core system memory access by any user process, while Spectre allows an unprivileged application to access the memory space of others.

What can happen? In simplest terms, one program executed on your computer can gain access to data that belongs to other users or utilize the operating system to access data, including passwords and personal data. What is affected? Most personal computers, servers and mobile devices. What can we do about this? The simple answer: patch everything that is affected, including BIOS, OS and browsers.

If everything seems so simple, why is this such a big problem? The answer is not so simple. In terms of scope, possible attack vectors and potential ramifications, these two vulnerabilities may present the largest impact on our computer systems and networks that we have seen in a very long time.

Let’s start with the fact that it is likely that every computer and mobile device in your infrastructure is somehow affected, along with a significant number of IoT devices. Arguably, your shared environments (such as Citrix) present the greatest vulnerability, as these systems are designed for multiple users, and their core design depends on secure segregation of user resources.

Let’s consider the work ahead for many of us in the security community. We need to identify all the systems and software that must be patched, test the patches, implement them and deal with the “side effects.” This includes legacy systems, as the vulnerabilities affect microprocessors manufactured as far back as 1995.

Today, while some patches introduce processing slowness and compatibility issues, not patching is not an option. We learned our lesson with the 2017 NotPetya ransomware, where the compromise of a single unpatched system could infect adjacent devices across the rest of the network.

As of now, there are no known mass exploitations of these vulnerabilities, but it is not because the hackers discounted these issues as “unexploitable.” In the world of hackers, exploitation of a vulnerability is only part of the equation. First, you must have a reliable distribution vector for the malware. Can an exploit be distributed in an email, on malicious sites or through other means to facilitate infection?

After malware is allowed to execute its exploit, it must deploy a malicious payload – a set of instructions for what to do next. Sometimes it is an instruction set that allows the victim system to interact with a Command & Control server; sometimes it simply deploys ransomware. At this stage, considerable effort must go into bypassing typical security controls such as anti-virus, IPS and other safety tools.

Lastly, there must be a mass monetization component – for ransomware, it is a setup to ask for a ransom, receive payments, release the encryption keys; in other cases, to facilitate data identification and exfiltration. None of these tasks are simple for the hackers and they can rarely be accomplished by a single person. Thus, nearly a month after the world became aware of the microprocessor vulnerabilities, there is still no mass exploitation.

Today on the dark web, the most common relevant conversation is not about abuse of Meltdown or Spectre. The most entrepreneurial hackers want to know if there are similar vulnerabilities in microprocessors that are not discovered and patched. Hacker bounties for these zero-day bugs are astronomical, and for good reason. No matter how good your system security is, if there is a fundamental hardware flaw, almost nothing will stop hackers from exploiting it on any vulnerable target of their choice.

Meanwhile, as hackers are regrouping and fantasizing about the unexploited data caches, let’s keep diligently patching and hope that the next vulnerability or wave of exploitation will not be brutal.

Cryptocurrency and Its Future

Sandeep Seghal
These days, everyone is trying to understand cryptocurrency. Cryptocurrency is digital money that is designed to be secure and anonymous. Most of us would recognize Bitcoin as one type of cryptocurrency. Last year, the value of Bitcoin and many other cryptocurrencies appreciated significantly, though some governments and skeptics have described some cryptocurrency offerings as Ponzi schemes. Bitcoin was established in 2009 as the first cryptocurrency. Currently, more than 1,000 cryptocurrencies are on the market. All are based on blockchain technology. Blockchain uses distributed computers to store a record of transactions and verify new transactions, without requiring any central organization for validation.

Cryptocurrency has become a hot topic. In 2017, many investors backed cryptocurrency rather than investing in penny stocks, mutual funds or other financial instruments. We have seen many new cryptocurrencies launched in the market. Bitcoin touched U.S. $20,000 in December. Ethereum, Ripple, Dash and Litecoin are other top cryptocurrencies that moved up fast in 2017, although all have declined in value so far in January.

The technology behind cryptocurrency
Let us further examine the technology of blockchain. Is it reliable? Is it difficult to hack? Blockchain is built on distributed ledger technology, which securely records transactions across a peer-to-peer network. Blockchain technology was created by Satoshi Nakamoto to enable trading of Bitcoin. We can now see that its potential reaches far beyond cryptocurrency.

A distributed ledger is a record of transactions that is shared and synchronized across many computers without centralized control. Each party owns an identical copy of the record, which is automatically updated as soon as any additions are made. In blockchain, every participant can see the data and verify or reject it using consensus algorithms. Approved data is entered in the ledger as a collection of “blocks” and stored in a sequential “chain” that cannot be altered. Bitcoin and Ethereum are based on public blockchain. Anyone can read public blockchains, send transactions to them and participate in the consensus process.
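To make the “chain of blocks” idea concrete, here is a minimal sketch in Python – a toy illustration only, since real blockchains add consensus, signatures and peer-to-peer networking on top of this structure. Each block stores the hash of its predecessor, so altering any historical block invalidates every block that follows it:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a tiny three-block chain.
chain = [make_block(["genesis"], prev_hash="0" * 64)]
for txs in (["alice->bob: 5"], ["bob->carol: 2"]):
    chain.append(make_block(txs, prev_hash=block_hash(chain[-1])))

def chain_is_valid(chain: list) -> bool:
    """Verify each block points at the true hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(chain_is_valid(chain))                    # True
chain[1]["transactions"] = ["alice->bob: 500"]  # tamper with history
print(chain_is_valid(chain))                    # False - tampering detected
```

In a public blockchain, it is the consensus process described above – not a central authority – that decides which version of this chain the network accepts.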

The future of cryptocurrency
In my view, public blockchains in their present form might not survive. Their future form could be controlled consensus in a private, distributed ledger network. A private network could enforce many rules – for example, that no transaction is valid unless a minimum of four participants approve it, or that a central bank such as the Reserve Bank of India signs each transaction. Each participant in a private network can have a legal agreement of commitments to the other participants. This trend may push banks and governments to adopt country-specific cryptocurrencies.

One enterprise software firm, R3, has come up with a distributed ledger made up of many nodes which would allow for a single, global database. This single ledger would record transactions between organizations and people. This platform might fulfill financial institutions’ dream of secure and consistent transactions.

In 2018, the cryptocurrency market will continue to grow as more capital is invested. Cryptocurrency-based funds may be launched. For scalability and performance, more platforms will be born. New regulations may surface to manage cryptocurrencies as they become part of our financial system.

In the Age of Cybersecurity, Are Data Centers Ignoring Physical Security?

Anna Johannson
Maintaining a data center is a huge responsibility. While you certainly have systems in place for dealing with cyberthreats, are you giving enough attention to physical security? This is still a very important aspect of the security equation.

Five Tips for Keeping Data Centers Secure
The objective of physical data center security is pretty straightforward: keep out unauthorized people while closely monitoring those who do have access. That being said, the actual process of securing a data center isn’t nearly as simple. You have to be meticulous and comprehensive in your approach. The following tips should prove helpful:

1. Be strategic about the location. The location of your data center is paramount. You want it tucked away, outside of floodplains and situated in an area that can be easily secured. Ideally, the plot of land should be away from main roads and highly trafficked areas, but you also don’t want it in such a discreet location that unwanted behavior goes undetected.

2. Redundant utilities. Every little detail of your data center matters – including access to utilities. Inadequate access could compromise the entire operation. “Data centers need two sources for utilities, such as electricity, water, voice and data,” Sarah Scalet writes for CSO. “Trace electricity sources back to two separate substations and water back to two different main lines. Lines should be underground and should come into different areas of the building, with water separate from other utilities.”

3. Install security cameras. It’s important to install security cameras for a number of reasons. Security cameras serve as effective deterrents: when criminals (or even employees) see a camera, they’re suddenly less interested in doing whatever they were planning to do. Cameras have a way of preventing crime before it ever starts. In addition, security cameras allow you to go back and see who or what caused a specific outcome, which can be invaluable when a security issue does occur. Fortunately, today’s security cameras are more practical and cost-effective than ever. Cameras with high weatherproof ratings can withstand substantial amounts of rain, snow and dust while still capturing clear audio and video. And because today’s cameras are typically available at modest price points, you can afford to install as many as you need for total coverage both inside and outside the data center.

4. Maintain a low-key appearance. Data centers are best left unnoticed. In an ideal world, even your closest neighbors wouldn’t know that a data center is on the property. This means you need to nix the signage and keep the building as unassuming as possible. If you’re really serious about security, consider putting up decoy signage for a faux business.

5. Layer security. A data center should have multiple layers of security so it’s impossible for someone to gain access by bypassing just one mechanism. For example, it’s a good idea to have a combination of exterior gates, biometric checkpoints, access codes and secured cages around specific hardware. While this may initially feel excessive, you’ll never regret a multi-layered approach.

Make an investment in security
It makes no sense to build out a data center and then skimp on security – whether of the physical or cyber variety. A data center comes with massive amounts of responsibility, and organizations must do what it takes to protect their investment. By no means are these tips a comprehensive security strategy, but they do provide a nice starting point. Are you prepared? Now’s the time to take action.

Simple, Structured Approach Needed to Leverage Threat Patterns

Demetrio Milea and Davide Veneziano
IT risks come from various sources that are not always easy to identify in advance, making prevention and mitigation really challenging. With the explosive growth in cloud, social, mobile and bring your own device (BYOD) computing, the attack surface is greater than ever, and new attack scenarios become possible due to the complexity of the network topology and the variety of enterprise applications and technologies that have to coexist.

Deploying threat patterns – defined as sets of characteristics describing suspicious behavior that can be surfaced in security monitoring solutions, whether detective (such as a SIEM platform) or preventive (such as a web gateway platform) – is a great starting point for security operations teams to identify suspicious activities or potential attacks against networks, systems or applications.

However, threat patterns are complex to maintain, subject to false positives and negatives, and result in extra effort for limited security operations center (SOC) resources, whose time traditionally is consumed tuning their platforms and trying to identify what really matters among a huge number of alerts and indicators.

We believe that adopting a simple, structured and well-defined process borrowed from the Software Development Life Cycle (SDLC) is the key to developing and maintaining those threat patterns effectively. A well-designed threat pattern can increase the threat detection rate and optimize the effort of the SOC, which can then focus only on the risk scenarios that really matter to the organization.

This approach, described and detailed in a new ISACA white paper, Threat Pattern Life Cycle Development, guides the threat analyst throughout five phases:

  • Analysis, in which input data is mapped against significant use and misuse cases
  • Design, in which the logical flow and thresholds are defined
  • Development, in which the threat pattern is first deployed in the selected security platform
  • Testing, to ensure that the functional requirements have been met
  • Evolution, to ensure that the logic of the threat pattern continues to be aligned with business and risk objectives throughout its life cycle
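As a rough, hypothetical illustration of what the analysis and design phases might produce before anything is implemented in a specific SIEM, a threat pattern can be captured as structured data. Every field name and threshold below is invented for illustration, not an ISACA or vendor format:

```python
# Hypothetical design-phase artifact for a threat pattern.
brute_force_pattern = {
    "name": "possible-credential-brute-force",
    "use_case": "Repeated failed logons followed by a success",
    "inputs": ["authentication logs"],
    "logic": {
        "failed_logons_threshold": 10,  # tuned during the testing phase
        "window_minutes": 5,
        "followed_by_success": True,
    },
    "risk_scenario": "account takeover",
    "review_cycle_days": 90,  # evolution phase: revisit periodically
}

def matches(pattern: dict, failed_count: int, then_succeeded: bool) -> bool:
    """Apply the pattern's logic to observed counts within the window."""
    logic = pattern["logic"]
    return (failed_count >= logic["failed_logons_threshold"]
            and then_succeeded == logic["followed_by_success"])

print(matches(brute_force_pattern, failed_count=12, then_succeeded=True))  # True
```

Expressing the pattern this way keeps the logic, thresholds and review cadence explicit, which is what makes the testing and evolution phases tractable.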

We believe this new ISACA guidance will prove useful in putting threat patterns to better use.

Meltdown/Spectre: Moving Forward

Ed Moyle
Yesterday, we provided some background information on Meltdown and Spectre, the two issues that are taking the security world (and in fact users of technology in general) by storm. By now, most practitioners are probably up to speed (or getting there) on what the issues are, what caused them, and how to address them in the short term. Looking down the road though, it already is clear that even after the initial cleanup is taken care of, these issues will be with us for a long, long time to come.

This means that there are some important considerations for savvy practitioners to address beyond just sliding back into “business as usual” once initial remediation is complete. These issues represent both an opportunity and a potential wakeup call for how we approach our security programs and how we model threats in the environment. Handled well, they can bolster our organizational impact and help meet security and assurance goals both now and into the future; handled poorly, they ensure the status quo at best, or potentially even a regression in our security efforts and overall posture.

Messaging and communication
The first area where this is true is in the realm of messaging and communication around these issues. It goes without saying that in a situation like this one, communication is vital. What may be less apparent is the degree to which good communication is a success factor for security efforts, and poor communication a stumbling block. To the extent that we can establish communication that is accurate, ongoing and 360-degree in nature (i.e., directed upward to the board and senior decision-makers as well as outward to peer organizations and staff), doing so gives practitioners an avenue to build organizational credibility and internal social capital. It can also help cement a reputation as a reliable, go-to source of information about topics like these going forward. Conversely, poor (or, worse yet, no) communication is a recipe for panic and pandemonium.

There are a few reasons why this is the case, and they go hand-in-hand with why communication is so important in the first place. First, it is a given that personnel (including executives and board members) will read stories in the press or hear information second-hand that is only sometimes complete and accurate. Second, even when the information they receive is accurate, they may draw conclusions that are off the mark. Obviously, neither of those outcomes is desirable.

The antidote to this is communication. The ability of a practitioner to socialize actionable, complete and risk-informed information across those areas is an opportunity to build credibility and to help the organization achieve its desired response outcomes. Consider, for example, statements an executive might hear in the press: that “almost every modern computer is vulnerable” and that “attackers can use the information to steal secrets.” Now, as we covered yesterday, both of those statements are true. However, it also is true that, as of now, the issue involves information disclosure only – and that we have not yet seen attacks leveraging these issues in the wild. It would of course be foolhardy to assume that these things will always be the case. That said, it is important to temper panic-inducing statements that folks might hear in the press with a systematic, workmanlike understanding of the risks and communication about those risks to the organization at large.

How does a practitioner establish that solid and grounded communication? It starts with understanding the issues and becoming educated on how your vendors are approaching the issues, how they operate, what their impacts are to your organization specifically and what your response posture is. Once you have this information in hand, socialize it. Establish reliable and consistent communication channels (again, both upwards and laterally) to inform others, and be open and receptive to answering questions that others might have. In fact, it can be helpful to establish a response dashboard, frequently asked questions, or other information source in a frequently used repository that your organization might have, such as a corporate intranet or information portal. Even just making the effort to put that information out there alongside a few practical, non-FUD statements (with links to authoritative technical details for the curious) can go a long way.

A learning moment
Additionally, events like this can serve as an important reminder and lesson for security and assurance teams. In this case, the issues can lead to information leakage across boundaries. From a threat modeling standpoint – for example, when evaluating cloud workloads or containerized applications – we often assume that attacks against segmentation will not occur.

And yet, they do. Attacks like these illustrate how things we take for granted (in this case, the segmentation barrier between processes, applications, and in fact the operating system itself) can occasionally be more porous or weaker than we assume. It’s important to keep this in mind because there are situations where it can impact how we plan the security architecture and how we implement controls. Last year, for example, we saw issues like Flip Feng Shui, Rowhammer, and hypervisor and container engine issues that potentially undermine the segmentation model in cloud and container environments.

This means that, for a truly future-proofed threat model, it can be advantageous to build in mechanisms that defend against a segmentation attack for high-risk situations. For example, we might choose to leverage hardware storage for cryptographic keys in a cloud situation to help mitigate against a possible segmentation attack. The point isn’t to get hung up on the specifics (for example, I’m not advocating that we store each and every single key we might use in an HSM), but instead to keep an open mind to segmentation attacks as we build security architectures, assess threat models and implement controls.

Likewise, if we haven’t taken the time to adopt systematic approaches (e.g., formalized threat modeling) but instead are “winging it,” issues like this can remind us why a more workmanlike approach is beneficial. Would a formalized threat model have told us ahead of time that an issue like this one would happen? Probably not. However, it would have given us an opportunity to at least consider an attack that undermines segmentation in a cloud environment and to build architectures that help preserve confidentiality to the extent possible.


Understanding Meltdown and Spectre

Ed Moyle
There’s a tempest in progress – and, no, I’m not talking about the “bomb cyclone” currently hitting the US eastern seaboard. Instead, I’m referring to what’s going on in the technology and security communities in the wake of the newly published Meltdown and Spectre issues.

Understanding these issues is important for practitioners, regardless of whether you are a security, governance, risk or assurance professional: not only do the issues require action to address, but there is also a significant amount of coverage in the mainstream and trade press. This in turn means interest from board members and senior leadership, and concern from other teams in our organizations. Having a solid understanding of what the issues are – and a realistic understanding of their impacts, future ramifications, etc. – is therefore both valuable and necessary.

With that in mind, let’s unpack these two issues: what they are, why they matter, and what organizations can do about them. In the days ahead, we’ll discuss lessons learned, how organizations can use them as part of their broader planning, and how to communicate up the chain about risk impacts. But first, let’s examine the issues themselves and why there’s so much hubbub about them.

What are the issues?
Meltdown and Spectre are similar in that they both relate to speculative execution side-channel attacks – meaning, they both exploit the “speculative execution” feature in modern hardware design to enable side-channel attacks. Attacks against what? In the case of Meltdown, the isolation between user mode applications and the operating system (kernel space). In the case of Spectre, the segmentation between different applications. Key to understanding these attacks are two separate (but related) concepts: speculative execution and side-channel attacks. Let’s walk through them to explain the mechanics of their operation.

First, speculative execution. We all know that in every computing environment since Turing, computing devices execute a series of instructions in sequential (linear) fashion. A computer program instructs the processor to perform operation A, followed by operation B, and then operation C.  The order matters, and the order of execution is dictated by the program that is being run. Think about it like a recipe for baking bread: first you proof the yeast, then you add flour, then add eggs, then add water, and so on. In this analogy, the recipe being followed is akin to the program being executed while the person doing the cooking is a computer’s processor.

As anybody who has ever followed a recipe knows, though, sometimes individual steps can take a while to complete – proofing the yeast or preheating the oven, for example. Someone following a recipe that includes those steps could choose to sit idly by while those things complete (i.e., while the oven preheats or the yeast proofs), but doing so would make them a pretty inefficient cook. More reasonably, they might choose to progress to other tasks while steps that take longer happen in the background.  For example, they might flour a kneading surface, prep other ingredients they’ll need, or get baking pans ready while the oven is preheating. At a macro level, the order is the same as dictated by the recipe (i.e., preheating happens first before you put the bread into the oven), but at a micro level, steps are adjusted in their order to optimize efficiency. Believe it or not, computers do exactly this, too.

At a hardware level, a program might dictate that A, B, and C are executed in sequential order. Just like preheating the oven, though, individual steps might take longer to complete. For example, operation B might require accessing memory, which takes longer than instructions A and C, which can be done entirely within the bounds of the processor, using only the processor’s registers. To optimize performance, just like the cook moves on to other things, so, too, does the computing hardware. It might perform task C ahead of time while it waits for task B to complete. 

Speculative execution builds on this relatively simple concept but takes it one step further. In this case, it relates to the pre-execution of certain tasks dependent on the outcomes of other tasks. In essence, the processor makes a “guess” about which path will be followed based on what it has seen happen in the past. If the outcome is different, the results of any pre-execution are rolled back and no time is lost (timing-wise, it’s the same as if the processor were idle), but if things go as expected, significant time savings are realized.

The second thing to understand is that these are side-channel attacks, meaning they leverage the physical, electrical or mechanical characteristics of the implementation (i.e., “side-channel information”) to operate. As noted above, when speculative execution occurs and the processor makes a “guess” that turns out to be wrong, what’s supposed to happen is that the processor disregards the outcome of the pre-execution and unwinds to its state from before that speculative execution. And, in fact, that’s what happens.

However, in some cases, the act of speculative execution leaves behind state changes that an attacker can measure. Consider, for example, an attacker wishing to know whether a given value exists in memory. In a situation where the attacker can cause that value to be cached, they might measure the time required to perform an access and deduce whether the value is present based on how long the access takes (i.e., it takes longer when the value is not cached than when it is). This is an example of a side-channel attack.
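To make the side-channel principle concrete, here is a toy sketch in Python. It is emphatically not Meltdown or Spectre – those measure CPU cache timing at the hardware level – but it shows the same idea: recovering a secret purely by observing how long an operation takes. The secret string, the exaggerated per-character delay and the helper names are all contrived for the demo:

```python
import time

SECRET = "s3cr"  # hidden from the attacker; only timing leaks out

def slow_equals(guess: str) -> bool:
    """Naive comparison that bails out at the first mismatch, so its
    running time reveals how many leading characters are correct."""
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)  # exaggerate per-character work for the demo
    return True

def time_guess(guess: str) -> float:
    start = time.perf_counter()
    slow_equals(guess)
    return time.perf_counter() - start

# Recover the secret one character at a time: at each position, the
# candidate that takes longest to reject shares the longest prefix.
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
recovered = ""
for position in range(len(SECRET)):
    padding = "_" * (len(SECRET) - position - 1)
    best = max(alphabet, key=lambda c: time_guess(recovered + c + padding))
    recovered += best

print(recovered)  # prints "s3cr" without ever reading SECRET directly
```

Meltdown and Spectre apply this same measure-and-deduce logic to the processor cache, using speculative execution to pull secrets across protection boundaries before the “rollback” happens.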

The salient point is that these attacks leverage the speculative execution functionality to learn information from across boundaries: from across the user mode/kernel mode boundary in the case of Meltdown and from across application boundaries in the case of Spectre.

What can you do?
Once we understand conceptually what the issues are, the next logical question is what can be done about them. Unfortunately, the answer will differ from organization to organization. That said, there are a few important things to keep in mind as we all deal with the fallout from these issues.

First, it bears outlining that these are hardware rather than software issues. Because the speculative execution feature is implemented in the firmware and hardware on which operating systems and applications run, it impacts a wide swath of any given technology ecosystem. This means that chances are pretty good that your environment is impacted … and heavily. 

This is true regardless of whether you’re talking about technology that’s on-premises or in the cloud, regardless of the OS in use, and even in devices that you might not expect (think IoT). In fact, the scale of impact is exactly why these issues are receiving so much press coverage and executive attention.

Second, it is important to note that there are patches available for many platforms and more on the way. It is important that you apply them as they become available and that you keep on top of the vendor information and workarounds for the platforms you use. Again, this is a hardware issue at the root of it all, so understand that current software workarounds can only go so far (in large part by disabling some of the performance optimizations that are part of the underlying issue). 

Getting to full remediation is going to be a long road to travel. The performance optimizations from speculative execution have been around for decades and these issues are only now coming to light.  And, ultimately, it’ll take hardware design changes to fully address the issues. This will take time, patience, and clearheaded thinking to address over the long haul – meaning that panicking about it now isn’t useful. 

Lastly, keep in mind that these are: a) information disclosure issues that are b) not actively being observed in the wild. Both of these could change as further research occurs or as bad guys get creative in leveraging the issues to conduct attacks. However, for now, it’s about getting access to data rather than actively gaining unauthorized access to systems. The point is that keeping an objective, risk-informed posture (including a clear head and staying free from panic) is important.

5 Security Tips to Keep in Mind When Developing a New Website

Larry Alton
Few things put a business at more risk than developing a website without putting an emphasis on security at a foundational level. Small and large businesses alike are being targeted like never before, and hackers are becoming more sophisticated in their methods. If you have a loophole, they will expose it and compromise your business.

Thankfully, website security isn’t some impossible challenge that requires tons of resources to execute. Here are some practical tips:

1. Carefully analyze different site builders
Assuming you aren’t building a site totally from scratch, the first big decision you have to make in the website development process is which site builder you’re going to use. Believe it or not, this has a big impact on the integrity of your site from a security perspective.

Choosing a website builder can be challenging, especially considering there are more than a dozen reputable options on the market. The key is to find a website builder review site – such as Top10BestWebsiteBuilders – that lists a variety of options. Using one of these resources, analyze the security features and read reviews from actual users. This will tell you a great deal about the integrity of the platform.

2. Choose the right host
Once you choose a website builder, you can turn your attention toward selecting the right host. Again, there are a variety of web hosts to choose from and you’ll have to spend your time carefully analyzing the options. If possible, try to go with a host that offers Secure File Transfer Protocol (SFTP), which makes the process of uploading files safer.

3. Buy an SSL certificate
If you’re selling any sort of product or service on your site, you need a secure sockets layer (SSL) certificate. An SSL certificate provides an added layer of security by encrypting the traffic between the computers and servers communicating with one another. It’s also a big trust factor: many online shoppers won’t do business with websites that don’t have SSL certificates.

4. Restrict admin access
If you’re building a WordPress website and are the only person who needs access to the website, you can create an additional layer of security by restricting admin access by IP address.

This is done by opening the main .htaccess file and finding the line that reads “Allow from xx.xx.xx.xx.” Replace the x’s with your own IP address, and the admin panel can no longer be accessed from any other address.
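As a hedged sketch of what such a rule can look like – the IP address below is a documentation placeholder, and a rule of this kind is typically placed in an .htaccess file in the admin directory (e.g., wp-admin) – note that the exact directives depend on your Apache version:

```apache
# Pre-2.4 Apache syntax: deny everyone, then allow a single admin IP.
# 203.0.113.10 is a placeholder - substitute your own address.
Order Deny,Allow
Deny from all
Allow from 203.0.113.10

# Apache 2.4 and later uses the equivalent:
# Require ip 203.0.113.10
```

Keep in mind that this only helps if your own IP address is static; on a dynamic connection you would lock yourself out whenever it changes.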

5. Don’t let people know what version you’re using
Keeping your CMS updated is obviously an important part of staying protected, but don’t make the mistake of publicly advertising which version you’re using. This gives would-be hackers the advantage of knowing which vulnerabilities your website is susceptible to. Many CMS platforms automatically add the version to your site’s code, but you can find and remove it. In WordPress, it appears as a generator meta tag in the wp_head output; themes and plugins commonly remove it with remove_action('wp_head', 'wp_generator').

Don’t overlook the need for security
The temptation – especially for smaller businesses – is to take only very basic security steps, on the belief that nobody cares enough to compromise a niche operation in an obscure industry. That couldn’t be further from the truth. Smaller businesses are often more attractive targets, because hackers know they’re typically easier to compromise.

Any time you develop a new site, be sure you’re taking this into account. Website security is a lot easier and more effective if you prioritize it from the ground up.
