The proliferation of Internet of Things devices is well-documented, with projections of more than 20 billion connected things by 2020. Installations of connected devices span virtually all industries and cover just about any use case that can be imagined.
With such an enormous volume of connected devices and minimal regulation, it comes as little surprise that many of them have been programmed incorrectly and are supplying users with false or misleading information.
“So, how do you look at scenarios like that?” said ISACA board director R.V. Raghu during Wednesday’s session on IoT audits at EuroCACS in Edinburgh, Scotland. “It can become very dangerous.”
IoT audits should align with enterprise needs and ensure a compliance approach is factored in from the outset. Auditing IoT can help address a wide array of important questions, including each of the following:
- How will the device be used from a business perspective, and what business value is expected?
- What threats are anticipated, and how will they be mitigated?
- Who will have access to the device, and how will their identities be established and proven?
- What is the process for updating the device in the event of an attack or vulnerability?
- Who is responsible for monitoring new attacks or vulnerabilities pertaining to the device?
- With whom will the data be shared?
In the case of IoT, the answers to these questions can have urgent implications. Raghu used a nuclear plant as an example, saying that the capacity to interpret accurate data in a timely fashion can guard against potentially damaging irregularities at the plant.
“We want to be able to pick up the data at the right point and then tell you, this is what we need to do,” Raghu said.
Privacy considerations need to be taken into account by IoT device manufacturers, given the enormous capacity to gather data. Encryption might need to be built into devices to protect potentially sensitive information, such as with medical devices used by hospitals.
“Do we need to get greedy and collect everything that is possible, or do we only collect the data that makes sense to us?” Raghu said. “And, in the post-GDPR world, that is a very important question to ask.”
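The “collect only what makes sense” principle can be enforced at the point of ingest. Below is a minimal Python sketch of that idea; the field names and sample reading are hypothetical, chosen only for illustration. It drops everything outside an approved field list and pseudonymizes the device identifier with a salted hash:

```python
import hashlib

# Fields the business actually needs; everything else is dropped at ingest.
# These names are hypothetical, for illustration only.
ALLOWED_FIELDS = {"device_id", "timestamp", "heart_rate"}

def minimize_and_pseudonymize(reading: dict, salt: bytes) -> dict:
    """Keep only the allowed fields and replace the raw device
    identifier with a salted hash, so stored telemetry can no longer
    be tied directly to a patient-facing device ID."""
    kept = {k: v for k, v in reading.items() if k in ALLOWED_FIELDS}
    if "device_id" in kept:
        kept["device_id"] = hashlib.sha256(
            salt + kept["device_id"].encode()
        ).hexdigest()
    return kept

raw = {
    "device_id": "pump-0042",
    "timestamp": "2018-05-30T10:15:00Z",
    "heart_rate": 72,
    "patient_name": "J. Doe",   # sensitive and unnecessary: dropped
    "room_number": "312B",      # sensitive and unnecessary: dropped
}

clean = minimize_and_pseudonymize(raw, salt=b"per-deployment-secret")
print(sorted(clean))  # only the allowed fields survive
```

The design point is that minimization happens before storage: data that is never collected cannot be breached or subpoenaed.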
Raghu also expressed concern that regulation of IoT devices is lagging behind the surging usage, meaning there is little standardization on the IoT landscape.
That puts even more of a premium on strong risk management and robust controls. Among the baseline controls that should be put in place for IoT devices are identity and access management, malware protection, transmission confidentiality and time-stamping. Raghu also highlighted “Level 2” controls, such as patching, vulnerability management and log management, saying many organizations do a subpar job with their log management.
“People don’t want to do the log analysis, and if you don’t do the log analysis, you don’t understand how the device is behaving, and you could have a serious problem on your hands at some point,” Raghu said.
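To make the point concrete, here is a minimal Python sketch of the kind of log analysis Raghu describes. The log format, device names and threshold are all hypothetical; a real deployment would stream logs from syslog or a device management platform:

```python
from collections import Counter

# Toy log lines in a hypothetical key=value format.
LOG_LINES = [
    "2018-05-30T10:01:02 device=cam-07 event=AUTH_FAIL",
    "2018-05-30T10:01:09 device=cam-07 event=AUTH_FAIL",
    "2018-05-30T10:02:41 device=cam-07 event=AUTH_FAIL",
    "2018-05-30T10:03:15 device=therm-01 event=HEARTBEAT",
    "2018-05-30T10:04:00 device=cam-07 event=AUTH_FAIL",
]

def flag_noisy_devices(lines, event="AUTH_FAIL", threshold=3):
    """Count occurrences of a given event per device and return the
    devices that exceed the threshold -- the kind of baseline check
    that never happens if nobody reads the logs."""
    counts = Counter()
    for line in lines:
        fields = dict(f.split("=", 1) for f in line.split()[1:])
        if fields.get("event") == event:
            counts[fields["device"]] += 1
    return {dev: n for dev, n in counts.items() if n > threshold}

print(flag_noisy_devices(LOG_LINES))  # cam-07 exceeds the threshold
```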
Whether affecting security in homes, in hospitals, in cities’ critical infrastructure or just about any other setting of today’s society, the ramifications of insufficient IoT security can be serious. Raghu said IoT audits should emphasize the importance of continuous monitoring, as prescribing fixes months after the fact can be far too late.
“You don’t have that kind of luxury here,” Raghu said. “You might need to fix it on an ongoing basis, on the fly, so it becomes very important you have a real-time status on this.”
ISACA recently conducted a smart cities research survey in which it asked approximately 2,000 security and risk professionals questions focused on smart cities and their management, risks, and future technology initiatives. As a recovering city CISO, I can tell you that many of the survey questions were typical ones asked about smart cities. One question that caught my eye regarded what technologies were believed to be essential for the “security/resilience preparedness” of smart municipalities.
This question was of interest to me because city environments are collections of disparate systems. I used to joke that cities were packrats: they keep technologies beyond their typical lifespans due to the scarcity of resources needed to replace them. This mixture of legacy and up-to-date solutions can lead to environments with challenging levels of risk. As the CISO of a leading global smart city, I found one of my best assets for managing organizational risks was visibility into the municipality's operations, data flows and network infrastructure.
I say this in part because, of the five answers available for the survey question on resiliency, “Advanced Data Analytics” was the answer respondents selected most often as what smart cities need to strengthen their security preparedness. I am sure this is a shock to some. Many would have expected new tech or cutting-edge science to top the list. In actuality, the use of data analytics to manage scarce resources and highlight anomalous behavior is a better value because it provides visibility.
Data analytics and the technology platforms that incorporate it can be leveraged to orchestrate incident response teams’ reactions to business continuity events and to target efforts for isolating and remediating incidents. These platforms provide substantial visibility, and can give municipal security programs context surrounding risk exposure to their organization, as well as assist in the selection of controls and processes used to remediate threats.
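As a toy illustration of that anomaly highlighting, the sketch below applies a simple z-score test to a series of invented sensor readings. Real analytics platforms use far richer models, but the principle of surfacing the outlier for a responder is the same:

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag readings more than `threshold` population standard
    deviations from the mean -- a crude but transparent first pass
    at highlighting anomalous behavior."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hourly water-pressure readings from a hypothetical city sensor;
# one value is wildly out of line.
readings = [51.2, 50.8, 51.0, 50.9, 51.3, 50.7, 88.0, 51.1]
print(zscore_anomalies(readings))  # flags the 88.0 spike at index 6
```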
Smart city CISOs and security teams face challenges spanning business operations, enterprise infrastructure and the unique datasets generated by new smart sensor technologies. These challenges can be managed, I believe, with advanced data analytics platforms that provide visibility into how sensitive data, critical networks and citizen-facing operations are stored and used, so that the assets entrusted to them by their neighbors remain protected.
About the author: As Chief Information Security Officer (CISO), Gary Hayslip guides Webroot’s information security program, providing enterprise risk management. He is responsible for the development and implementation of all information security strategies, including the company’s security standards, procedures, and internal controls.
Gary is an active member of the professional organizations ISSA, ISACA and OWASP, and serves on the Board of Directors for InfraGard. He holds numerous professional certifications, including CISSP, CISA and CRISC, as well as a Bachelor of Science in Information Systems Management and a Master’s degree in Business Administration. Gary has more than 28 years of experience in information security, enterprise risk management and data privacy. Connect with him on LinkedIn or Twitter.
The healthcare industry has been revolutionized as the result of new technologies, advanced data collection methods, and the growth of cloud solutions. It’s equal parts exciting and intimidating. The only question is, are you staying up to date?
It’s time to take IT responsibilities seriously
In an age where data integrity is becoming increasingly important, healthcare organizations continue to be targeted and exposed. The Sixth Annual Benchmark Study on Privacy & Security of Healthcare Data, released last year by Ponemon, shows how serious the situation is.
“For the sixth year in a row, data breaches in healthcare are consistently high in terms of volume, frequency, impact, and cost,” explains Dr. Larry Ponemon, chairman and founder of the Ponemon Institute. “Nearly 90 percent of healthcare organizations represented in this study had a data breach in the past two years, and nearly half, or 45 percent, had more than five data breaches in the same time period.”
It’s not just outside attacks and data breaches, though. If you look at this industry, it’s clear that regulatory compliance – in the face of shifting digital requirements – is also a major challenge.
It’s time for healthcare organizations to slow down and focus on what they’re doing to protect themselves, their data and their clients. Here are a few IT-related suggestions to get the ball rolling in a positive direction:
1. Invest in training. You can implement sophisticated data platforms and develop intensive processes that protect patient data and promise to reduce risk, but it all comes down to the people. Your employees – i.e., the end users – will always be the weakest link in the chain. If you aren’t investing in training and providing them with the resources they need to be successful, then you’re compromising your entire approach.
2. Try predictive analytics. Regulatory compliance is obviously a chief concern in today’s environment, but you don’t have to feel like you’re constantly playing catch-up. With the right system in place, you can take a proactive stance and add value to your organization.
Many leading healthcare organizations are turning to predictive analytics. For example, a platform like IgniteQ uses proprietary algorithms and organization-specific CMS data to provide real-time analysis of how your company lines up with industry benchmarks and what you can do to improve quality of care, MIPS scores, and overall performance. This forward-facing approach is far more effective and powerful than the typical review-based strategy.
3. Get serious about limiting access. Nothing is worse than having your patients’ data stolen. Professional hackers can use this information to break into bank accounts, steal identities and wreak havoc for everyone involved. And while you may not be directly blamed for data theft, you’re almost always indirectly responsible.
The smartest thing you can do is limit access to protected patient data. The fewer people who have access to the data, the less risk there is that confidential information will get into the wrong hands. Nobody should be able to access patient information unless they have a specific need for it. Loose policies in this area will come back to bite you.
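A need-to-know policy ultimately reduces to a deny-by-default check. The sketch below, with hypothetical roles and record fields, illustrates the idea in Python; a real system would sit behind an audited identity and access management service:

```python
# Hypothetical role-to-field grants; anything not listed is denied.
ROLE_PERMISSIONS = {
    "attending_physician": {"diagnosis", "medications", "lab_results"},
    "billing_clerk": {"insurance_id", "billing_codes"},
    "receptionist": {"appointment_times"},
}

def can_access(role: str, field: str) -> bool:
    """Deny by default: a role sees a field only if explicitly granted."""
    return field in ROLE_PERMISSIONS.get(role, set())

assert can_access("attending_physician", "lab_results")
assert not can_access("receptionist", "diagnosis")
assert not can_access("unknown_role", "diagnosis")  # unknown roles get nothing
```

The design choice that matters is the default: an unknown role or an unlisted field yields no access, so a policy gap fails closed rather than open.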
Putting it all together
It’s easy to feel as if your organization is immune to the larger problems of the industry. Data breaches and compliance issues are things that other, less responsible organizations deal with. But this simply isn’t true. No modern healthcare organization is safe.
It’s imperative to understand that breaches and mistakes can come from both inside and outside the company. In an effort to strengthen your organization and safeguard data, you have to account for both forces. Better training, a focus on predictive analytics and initiatives to limit access to confidential information will give you a good place to start.
CISOs have traditionally focused on the triad of “Confidentiality, Integrity and Availability.” Recently, emphasis has been placed on confidentiality, hackers and zero-day attacks. However, industry trends now require that focus to broaden to all business information risks within organizations.
Since information is a key part of almost all business transactions, information risks are becoming pervasive. The trends I want to highlight include increased need for Security departments to partner with business colleagues to understand risks from their point of view, and increased importance of integrity and availability.
In my mind, integrity issues go back to the ChoicePoint data breach in 2005. This breach did not result from a zero-day attack. It was carried out by fraudulent customers using fake accounts. This falls under the “data integrity” mandate. At the time, many would have thought that this breach was outside of the scope of information security. But this needs to change today.
Such incidents have taken off in recent years. Fake news incidents have regularly made headlines. The potential effects of fake information on SEO results also have been highlighted. Consider the reports of identity “theft” using synthetic identities. Or the recent scandal at Kobe Steel over the internal falsification of quality data.
After the Yahoo breaches cost that company US $300M, cybersecurity assessments have become a more important part of M&A transactions. This type of assessment helps mitigate business risk: is the firm’s risk posture what it says it is? Class action lawsuits in the state of Michigan over faulty software algorithms raise another information business risk: software development errors may have real human consequences as well as business consequences.
In the recent volatile financial market, several investment firms suffered outages, even in our era of scalable, virtualized application architectures. Ransomware attacks last year caused victims to lose real money, not through ransom payments but through outages. The largest DDoS attack ever recorded was recently reported. These attacks are likely to remain common.
Theft of information also remains an important issue, but the diversity of incidents is increasing. An ex-Expedia employee pleaded guilty to stealing company information to facilitate insider trading of company stock. Attacks on keyless entry systems now enable faster work by car thieves: theft of vehicles, not just of information. In 2016, steelmaker ThyssenKrupp lost trade secrets to cyber criminals. A large retailer was recently hit with a $27 million fine for stealing a small contractor’s intellectual property. Instead of just stealing IDs, criminals are now stealing whole systems and the intellectual property that goes along with them.
These incidents highlight newer ways to misuse information resources and adversely affect a business. More longstanding hacker attacks using technology are not going away; traditional technology controls are still needed to mitigate these risks and significant progress has been made in doing so. But these newer incidents highlight threats in which the misuse case and consequences are highly entwined with the business. To find these risks, CISOs will need, more than ever, to understand the business they are protecting and the risks that are seen by senior management. Security controls will need to be more integrated in business operations to be effective.
A recent presentation by Facebook CISO Alex Stamos also highlighted these issues. In his talk, Stamos distinguishes between two components of technology risk: traditional InfoSec and “abuse.” He defines abuse as “technically correct use of a technology to cause harm.” In his view, the abuse category of risk is much broader than the traditional InfoSec concerns. Some of his solutions to better manage the abuse category of risk include broadening the focus of security practitioners and increasing empathy toward business users and leaders.
My own conclusion is: if the issue involves company information, and misuse can affect the company’s risk posture, then CISOs need to play an active role in mitigating that risk.
If you are interested in virtual reality, you surely know that the buzzword of 2018 is “standalone.” All the major VR companies are betting on standalone VR devices: HTC Vive China president Alvin Wang Graylin announced in a recent interview that his goal for 2018 is to see standalone devices becoming successful and Oculus’ Hugo Barra has expressed a similar opinion.
But what are standalone VR devices? And why do all of these important people believe in them? Let me answer these questions for you.
What is a standalone VR device?
The typical virtual reality headset can come in two flavors:
- Connected to a PC for an expensive, high performance experience (e.g. Oculus Rift and HTC Vive);
- Integrated with your mobile phone for a cheap, low quality experience (e.g. Gear VR and Daydream).
Figure 1 Oculus Go standalone headset (Image credit: Oculus)
Standalone VR sits somewhere in the middle between these two extremes: a good quality experience at an affordable price. Its peculiarity is that standalone headsets do not require anything else to work: they need neither a phone nor a PC; they work out of the box. A standalone device is similar to a mobile VR headset, but it already includes all the required electronic components: the display, the processing power and all the other hardware are embedded inside. It is a computer in its own right.
Figure 2 Vive Focus device (Image credit: HTC Vive)
This means that users can buy it, unbox it and put it on their heads to start experiencing VR immediately.
Why are all the companies betting on them?
Standalones offer a lot of clear advantages over the other available VR devices:
- They are affordable. A standalone VR headset costs less than a Samsung phone plus Gear VR, or an Oculus Rift plus a VR-ready PC. Some standalones are really cheap: the upcoming Oculus Go, for instance, will cost only US $200, which will make virtual reality affordable for many more people;
- They are easy to use. They don’t require setup of any kind. Anyone can use them, even without technological expertise; the user just has to put the device on his/her head. This means that virtual reality may exit the techie realm and enter the consumer domain;
- They are handy. It is very easy to carry a headset with you by just putting it in your backpack;
- They come in various flavors, like:
- very cheap standalone devices, such as the Oculus Go and Pico Goblin, that offer a very basic experience;
- more expensive devices that let the user move inside virtual reality, like the Vive Focus and Lenovo Mirage Solo;
- premium devices, such as the Oculus Santa Cruz and Pico Neo, that offer a more expensive experience but add the ability to both move and interact within the virtual world.
In my previous post, I highlighted how price and ease of use are two of the pain points of virtual reality. Standalone devices can solve both. They can make virtual reality mainstream and can be the key to eventually getting 1 billion people into virtual reality, as Mark Zuckerberg wants. That’s why more and more companies are betting on this form factor.
There’s a big issue that I want to highlight: in the very short term, standalones are VR-only devices, so they require people to spend money just to experience virtual reality. But the general consumer still doesn’t understand the purpose of VR and, in fact, a lot of free Cardboards and Gear VRs gather dust on the shelves. This means that the various manufacturers will have to convince people why they need to spend money to have VR.
Standalone devices will be important for the widespread diffusion of VR. But, as you can see, the road to mainstream adoption is still long.
If you attend a number of industry conferences, you are almost guaranteed to hear the cliché question “What issue keeps you up at night?” posed to enterprise security executives on stage.
While the question may be monotonous, the responses can trigger lively exchanges, especially in today’s cybersecurity landscape. Contending with the proliferation of connected devices, ransomware attacks, insufficiently trained security teams, a shortage of security personnel, rapid changes to the threat landscape, and responding to board concerns are just some of the many relevant issues that emerge from those who answer that seemingly “routine” question.
But there’s nothing routine about it. Thanks to the moving target of the rapidly changing threat landscape, many CEOs, CIOs, CTOs, CISOs and board members themselves are having some restless nights. As if the volume and complexity of security risks aren’t enough, most of these stakeholders are unsure of where their organization stands in its cybersecurity capabilities and resilience. Absent that fundamental understanding, how can boards of directors effectively assess critically important investment decisions to strengthen an organization’s security posture? How can directors on those boards gain assurance that the organization is taking the steps necessary to enhance its cybersecurity resilience, mitigate risks of attack, and have confidence in the organization’s capabilities to respond to an attack should one occur?
Assessing the strength of cybersecurity programs – people, processes and technology – must be viewed through the lens of enterprise risk and by measuring maturity, all with the objective of building organizational cyber resilience. Risk scenarios should be evaluated in terms of likelihood and business impact. Based upon the risks that rise to the highest level of concern in that context, the capabilities most important to mitigating those risks can be identified. Importantly, establishing sound audit processes must be an integral piece of ensuring the appropriate risk mitigation.
Many organizations rely upon outdated compliance frameworks to reduce risk. This approach has proven insufficient, as evidenced by ISACA research showing that less than half of security leaders are confident in their organization’s ability to combat anything beyond simple cyber incidents. The reality is most are ill-equipped to confidently conclude what their organization is or is not prepared to handle. A more comprehensive, evidence-based approach is urgently needed.
It is time for the industry to coalesce around a risk-based capability and maturity model for cybersecurity, one that draws on organization-specific evidence and analyses as a catalyst for strategic, purposeful action. One such assessment platform has been developed by the CMMI Institute, which ISACA acquired in 2016. CMMI developed its approach after hundreds of conversations with board directors, C-suite executives and other industry leaders. While each person came at the discussion from the perspective of their own industry experiences, a common need emerged: produce a consensus-based model, grounded in the most appropriate and recognized industry standards, so organizations can understand their level of cyber resilience – and how to strengthen it – through a comprehensive, risk-based approach.
Organizations need guidance in framing the business case for enhancing their cybersecurity programs, and to prioritize resources and investment to focus their programs on what matters most. The CMMI assessment platform, which will be released in April, does exactly that, while also providing organizations a road map to identify gaps through evidence-based analysis. This platform also will enable organizations to benchmark their capabilities against peers in their industry and in their geographic areas. The results are board-ready, presented in simple to understand business terms.
Recalibrating our approach to risk and security is only going to become more vital in the coming years, as suggested by unsettling new research on potential malicious uses of artificial intelligence. AI and other disruptive technologies will make the security landscape increasingly challenging going forward. Boards of directors are often characterized as being cavalier or apathetic about enterprise security. However, that assessment is unfair. The board conundrum is shaped by a combination of insufficient cybersecurity expertise and a lack of actionable information that can ensure security risks are integrated into the overall enterprise risk analysis.
Equipping enterprise leaders with a risk-based capability and maturity model represents a major step forward for those who seek an evidence-based understanding of where their organization stands on cybersecurity and what steps are being taken to fortify their programs and enterprise resilience. With a more quantified analysis of the enterprise’s current state, a roadmap to improved cyber resilience, and proper consideration of the financial investment to get there, the necessary due diligence on this complex, mission-critical area will result in improved oversight from board directors and confidence in their organization’s capabilities. Only then will leaders sleep better knowing that their organization is assuring the most secure path possible going forward.
Editor’s note: This blog post originally published in CSO.
Some form of risk management occurs on a daily basis in any organization currently in business. In many enterprises, risk management activities are ad hoc, compliance-based, focused on the latest threat in the news, uncoordinated, and use arbitrary means for analyzing whether the risks warrant any action. As a result, enterprises are not benefiting from a systematic, coherent means to manage the risks that have the greatest potential for business impact.
Risk management is the process of identifying, analyzing, and responding to conditions throughout the day-to-day enterprise operations with an eye on meeting the business or mission objectives developed in the strategic planning process. Often enterprises take on a certain amount of risk, defined as risk appetite, in order to achieve one or more objectives. This risk-taking can have a positive or negative impact on the enterprise and must be managed within limits, or risk tolerances, in order to know what actions or behaviors are necessary to achieve success and minimize negative consequences.
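The appetite/tolerance mechanics can be sketched in a few lines: score each risk scenario by likelihood and impact, then treat only the exposures that exceed the tolerance set by governance. The scales, scenario names and tolerance value below are invented for illustration:

```python
# Hypothetical 1-5 scales for likelihood and impact; exposure is 1-25.
RISK_TOLERANCE = 12  # maximum acceptable exposure, set by governance

scenarios = [
    {"name": "ransomware outage", "likelihood": 4, "impact": 5},
    {"name": "vendor data leak", "likelihood": 2, "impact": 4},
    {"name": "phishing credential theft", "likelihood": 5, "impact": 2},
]

def exposure(scenario):
    """Simple exposure score: likelihood times business impact."""
    return scenario["likelihood"] * scenario["impact"]

# Only scenarios beyond tolerance demand a treatment plan.
needs_action = [s["name"] for s in scenarios if exposure(s) > RISK_TOLERANCE]
print(needs_action)
```

The value of even this crude model is that it forces the tolerance to be stated explicitly, so accepting a risk becomes a recorded decision rather than a default.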
Including risk management as a focus area of the broader enterprise governance activities is necessary to align the stated mission, vision, values and actions of the enterprise with the management activities needed to ensure those objectives are met. The responsibility of effective governance is to align the risk behaviors with the organization’s risk appetite and tolerance. If your board or other senior leadership governance committee is only receiving information on risks from the audit committee, then there is a lack of knowledge about the difference in roles between risk management and audit. Each area, risk management and audit, has a role to play in the organization, but senior leaders cannot expect to understand the totality of the risks that the enterprise faces with only audit committee internal control and compliance-related reporting.
Whether your organization is just getting started with a more formal risk management process or you have a process but want to make sure it is aligned with best practices, there is a new ISACA publication, Getting Started With Risk Management, that can help. The guide is intended to build awareness of the risk management process and is not focused on control selection or internal control deficiencies as contributing to risk. The paper’s reviewers are primarily risk practitioners, and I believe this guide has the potential to improve the effectiveness of your enterprise through the implementation of sound risk management processes.
As you read the publication and begin to use it in your enterprise, please feel free to pass along constructive feedback and leave it in the comment section of this post. This will allow us to understand what worked well and what could be improved. In the meantime, we hope this guide is beneficial to you and your organization.
Early in my career, I had the opportunity to work with big retailers and non-profit organizations around the promised land of EDI protocol (Electronic Data Interchange, for those too young to have seen this acronym). The expectation in the industry was that, thanks to a common set of industry layouts adopted by both manufacturers and retailers, all transactions like purchase orders, confirmation of shipments, acknowledgment of receipt of merchandise, and payment of invoices, would be streamlined and automated.
In retrospect, we understand why the effort failed. The aim was a perfect set of common layouts for flat files that would be sent from one computer to another, over dedicated communication channels, and fed into a translator that would eventually create the purchase order or shipment notice in the recipient’s mainframe. Achieving it required long negotiations between powerful stakeholders. As a result, new technologies totally bypassed an effort that had been running for more than 30 years, and EDI never became a mainstream protocol in the e-commerce era.
I mention this example of a too-late definition of standards because of recent efforts triggered by the Central Bank of Mexico to create a common database of all fund transfers in foreign currency performed by banks in the country. The aim seems to be a central repository built by all participant banks, feeding their own funds transfer transactions, to eventually allow those banks to query this database in order to understand the risk profile of any particular client that has performed funds transfers in any other bank.
This goal is ambitious and logistically complex. Being a regulator of the banking system in the country, the Central Bank of Mexico can define the rules as needed and then require all banks to comply with these definitions. But the analogy I provided in relation to EDI protocol comes immediately to mind, and I foresee the following issues:
- The central bank has defined a standard layout based on the data elements that would be relevant to create the initial repository for its own regulatory purposes.
- The banks will have to build interfaces from their existing funds transfer systems with this new platform.
- The central bank may require additional fields in the future; if so, all banks will have to rush to adjust their existing interface, and then run additional processes to fill the missing data in the central repository.
- There is no incentive for the banks to implement the required applications and infrastructure.
- When rules are established around types of relevant queries needed to determine a risk profile, some large banks may then identify additional information that would make sense to add to the repository, impacting all other participant banks.
- The storage and computing power needed to track all funds transfer transactions across the entire banking system will overwhelm the central bank’s computing capacity, leading to delays in the queries. This would eventually require more taxpayer money to buy or rent additional infrastructure.
This seems to be a perfect use case for a Know-Your-Customer (KYC)/Anti-Money Laundering (AML) blockchain project. Of course, most of you understand that blockchain technology is the foundation of bitcoin and other cryptocurrencies. Instead of focusing on the idea of actual payments made with cryptocurrencies, I’d like to highlight the fact that blockchain technology can provide the perfect tool to develop a distributed ledger of funds transfers, spread across the computing power of participant banks in the system.
Here are the incentives for all:
- Banks’ combined computing power is larger. The calculation of crypto-tokens representing funds transfers can be spread across the computing power of all banks that want to participate in the system.
- Crypto-tokens would be simple. We are not talking here about creating money but “crypto-tokens” that represent real funds transfer transactions occurring in the system, linked to a different type of crypto-token that represents the client performing the transactions.
- Banks have incentive for participation. Every time that a bank converts a funds transfer transaction into a crypto-transaction linked to a crypto-client, using its own computing power, it will receive a “crypto-token” as payment. These crypto-tokens will be the key for the banks to perform queries to the database (see below).
- Queries to the common database will be paid with crypto-tokens. Every time a bank wants to perform a query to determine what kind of transactions a particular client has performed in the system, it will pay using the crypto-tokens received as payment for linking crypto-transactions to crypto-clients.
- Bank privacy is preserved. Do I need to say more?
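To make the proposal concrete, here is a minimal Python sketch, under my own simplifying assumptions, of two of the ingredients: pseudonymous “crypto-client” tokens and a hash-chained ledger of transfers. A production system would add consensus, digital signatures and the token-payment economics described above:

```python
import hashlib
import json

def client_token(bank_id: str, client_id: str, salt: bytes) -> str:
    """Pseudonymous 'crypto-client' token: banks can match activity
    across the system without revealing the raw client identity."""
    return hashlib.sha256(salt + f"{bank_id}:{client_id}".encode()).hexdigest()

def add_transfer(chain: list, record: dict) -> dict:
    """Append a transfer as a block whose hash covers the previous
    block, so no participant can quietly rewrite history."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, **record}, sort_keys=True)
    block = {"prev": prev_hash, "record": record,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(block)
    return block

SALT = b"system-wide-secret"   # hypothetical shared salt
chain = []
token = client_token("bank-A", "client-123", SALT)
add_transfer(chain, {"client": token, "usd": 50_000, "dest": "bank-B"})
add_transfer(chain, {"client": token, "usd": 75_000, "dest": "bank-C"})

# Each block commits to its predecessor, so tampering is detectable.
assert chain[1]["prev"] == chain[0]["hash"]
```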
As regulators start creating laws to set some ground rules for digital transformation, they could themselves become participants in initiatives like the one I’m putting on the table today.
Author’s note: Jose Angel Arias has started and led several technology and business consulting companies over his 30-year career. In addition to having been an angel investor himself, as head of Grupo Consult he participated in TechBA’s business acceleration programs in Austin and Madrid. He transitioned his career to lead the Global Innovation Group in Softtek for four years. He is currently Technology Audit Director with a global financial services company. He has been a member of ISACA and a Certified Information Systems Auditor (CISA) since 2003.
If you’re not seeing the results you want, you may need to switch SAP implementation partners. SAP implementation is becoming more important than ever, with revenues from enterprise resource planning (ERP) software expected to reach $84.1 billion by 2020, according to Apps Run the World. Not only does this technology help your organization become more efficient, but your top competitors are following suit – so you’ll need to increase your pace and attention if you want to keep up.
What happens if your chosen SAP implementation partner isn’t giving you what you need? What qualities should you look for in a new provider?
Signs It’s Time to Switch
If you notice any or all of the following signs, it’s a good indication that it’s time to switch providers:
- Significant delays. According to Clarkston Consulting, it may be time to switch if your partner is a cause of significant (or frequent) delays. Delays happen to everyone from time to time, but excessive delays are usually a product of failure to plan and/or a lack of sufficient resources to finish the job.
- Budgetary constraints. If your partner keeps running over budget, or if pricing has changed significantly, you may wish to find a partner better able to project – and maintain – previously estimated pricing.
- Lack of availability. How hard is it to get in touch with the leadership at your implementation partner’s organization? There should be an open line of communication at all times. If you feel like you’re in the dark, it may be time to switch.
- Significant shifts in vision. Has the implementation vision changed significantly from what it was at the outset? You may want a new partner who can more reliably achieve and sustain the current vision.
Qualities to Look For
These are the qualities you should look for in a new partner:
- Size. Size may not seem like an important feature in an implementation partner, but it has a few different effects. A bigger firm will be able to provide you with greater resources and a more reliable guarantee that your implementation will go smoothly, thanks to backups and more specialists on staff. However, a smaller firm might be less expensive and could provide you with a more personal experience. There’s no right answer here, so consider all size options and pick the one best suited to your company’s needs.
- History. According to Angela Nadeau, your implementation partner should have a track record of success. Certainly, talented new firms and partners may be perfectly capable of successfully partnering with you. However, for a large-scale implementation, it’s safer to choose a firm that has been around for years – and has the client testimonials and reviews to prove it.
- Areas of expertise. Not all implementation partners have the same areas of expertise. For example, some may specialize in certain types of software, while others may specialize in specific industries. You may need a specialist, or you may need a “general” implementation partner with a wider range of specialties that can flexibly serve a variety of different needs.
- Culture and personality fit. You’ll work closely with your partner throughout the implementation process, so make sure they’re a good culture and personality fit. Ideally, they’ll have the same core values as you and your firm, and their representatives will be able to work closely with your staff with no issues.
- Cost. Obviously, consider the cost of your implementation partner as well. If you pay more, you may gain access to better systems, more specialists and more experienced representatives – but if you don’t need all that, you’ll be able to save money with a less expensive option. If your budget has a hard limit, your decision will be much easier.
- Understanding. Finally, you’ll need to choose a partner that can understand your enterprise’s unique needs. Implementation shouldn’t be a one-size-fits-all, cookie-cutter approach. Talk to multiple candidates and lean toward those that listen to and are willing to address your specific circumstances.
Choosing a new SAP implementation partner doesn’t have to be a painful experience even though there are many options available. With enough research, you’ll inevitably find one that can serve your specific needs. Before you begin the process, outline the specific qualities that are most important to your organization, and start weeding out the candidates that can’t meet those criteria.
Editor’s note: Motivational business speaker Caspar Berry will bring his unique poker player’s perspective on risk to his opening keynote address at EuroCACS 2018, which will take place 28-30 May in Edinburgh, Scotland. Berry recently visited with ISACA Now to discuss topics such as overcoming the fear of failure and the dynamics of risk-aversion. The following is an edited transcript:
ISACA Now: You contend that all decision-makers are investors. What do you mean by that?
I mean that all decisions are investment decisions when you break them down. All decisions are resource allocation decisions. Allocations of money, yes, but often time. Sometimes we measure time in hours, but sometimes in less tangible units of passion or patience or dedication. All these resources are limited, and all are being allocated in a world of inherent uncertainty. By that, I don't mean the next five years. The next five minutes are uncertain … So, we're all investors, because everything we do is an allocation of a scarce resource in an uncertain world with a view of getting some kind of return on that investment by a variety of different criteria. That’s what investment is, at a fundamental level, and that's what we're all doing thousands of times a day.
ISACA Now: In an enterprise context, how can decision-makers push past their fear of failure?
In any context, the key to pushing through fear of failure is to understand what fear of failure is and where it comes from. Actually, what we colloquially call fear of failure is the product of two – arguably three – very prosaic psychological phenomena acting on us all the time. At the basic level is what we call “loss aversion,” a product of the diminishing marginal utility we get from most things we consume. Then there's time preference, which encourages us to seek short-term rewards and thus eschew long-term investments or delayed gratification. Then there's our judgment, which makes us pessimistic that new things can work compared to old things that apparently do.
In pretty much all these cases, the trick is to think long-term. In many contexts, that is an act of overriding our basic psychological hardwiring, which is still mostly – though not totally – designed to get us home safely at the end of each day. But, a poker player is not concerned about results at the end of any particular day. It’s irrelevant to us. We're concerned with maximizing our long-term expected value. If you do the right thing, then the law of large numbers gives you what you deserve at the end of the long term.
ISACA Now: Do you believe that millennials and other younger members of the workforce are any more or less risk-averse than those who came before them?
I don't think there is any data that proves this either way, per se. A lot of metrics for measuring risk tolerance are bunkum, anyway. Much of what we call risk tolerance is actually a product of the timeframe within which people judge the consequences of failure ... don't get me started.
My gut, however, says that broadly speaking, millennials are inherently no more or less risk-tolerant than older generations. I hate the brushes with which a lot of people tar the millennial generation. That video on YouTube of Simon Sinek saying how short-term they all are and how they're all the beneficiaries of nepotism is also bunkum. (That’s logically impossible, by the way; how can they all be beneficiaries of nepotism over people of their own generation?) Not a single statistic is cited. You may as well ask a man in a pub.
The reality is that risk tolerance is a product of genetics (which won't have changed noticeably between boomers and millennials) and circumstance. So, for example, if someone sees less future in following the tried-and-tested path of university and then a corporate job, they may be inclined to take more risk in their life, but no more than their parents would have done had they been in that situation. Indeed, look at my grandparents’ generation. They put their lives literally on the line every day in a way that we would never think of doing, but only because the alternative was Nazi occupation. They weren't genetically different; they just had different situations producing different upsides and downsides, or what economists call incentives.
ISACA Now: How does one become a professional poker player? What set you on that path?
Oh, that's easy. Poker is the easiest career in the world to get into. You just go to Las Vegas ... put your money on the table, and ta da!