The innovative capabilities of technology – as well as the potency of that technology – are advancing at a remarkable pace, creating new possibilities in today’s digital economy. This is mostly wonderful, with one large caveat: just because we have the ability to deploy a new technological innovation does not mean that we should. Prioritizing digital ethics is becoming increasingly important for all organizations that are mindful of the imprint they are leaving on society.
The transformative ways in which new technologies – particularly artificial intelligence – are being utilized call for deeper discussions around the ethical considerations of these deployments. Depending on the organization and its level of ambition for implementing these technologies, that might even include the need for a chief ethics officer to ensure these issues receive appropriate attention at high levels of the organization. Not every organization will have the need or the capacity to invest in a new role overseeing ethics, but virtually all organizations should have their chief information security officer – or other security leadership – devote sufficient time to anticipating and addressing how their organization’s technological innovations could be misused by those with ill intent.
Last month, the European Commission took a worthwhile step toward acknowledging this new imperative, putting forward a series of recommendations that emphasize the need for secure and reliable algorithms and data protection rules to ensure that business interests do not take precedence over the public’s well-being. As the Commission’s digital chief, Andrus Ansip, put it, “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.” Elsewhere around the globe, the Australian government is exploring policy that would seek to ensure that AI is developed and applied responsibly. “AI has the potential to provide real social, economic, and environmental benefits – boosting Australia’s economic growth and making direct improvements to people’s everyday lives,” said Karen Andrews, the country’s minister for industry, science and technology. “But importantly, we need to make sure people are heard about any ethical concerns they may have relating to AI in areas such as privacy, transparency, data security, accountability, and equity.”
While governmental agencies should absolutely play a leading role in addressing these new challenges, a more comprehensive global response is needed. Encouragingly, some corners of academia are recognizing and acting upon the challenge, with Stanford and the Massachusetts Institute of Technology among the institutions investing heavily in human-centered AI education. Existing professionals also will need guidance on how to account for the ethical implications of AI’s accelerated usage. The potential for malicious uses of AI has generated deep concern from global researchers and industry leaders, yet is seldom given due deliberation when products are being ideated and developed. The stakes are becoming far too high to tolerate such oversights. ISACA research on digital transformation shows that social engineering, manipulated media content, data poisoning, political propaganda and attacks on self-driving vehicles are leading, top-of-mind concerns for security practitioners when it comes to threats posed by maliciously trained AI.
Emerging digital ethics concerns are impacting a wide array of sectors, many of which carry inherent public health and safety ramifications, such as military training, medical research and law enforcement. Virtually all sectors are benefiting from technology advancements with the potential to deliver huge benefits for society, but they also face serious ethical questions that should not be discounted. The publication of data from nearly 70,000 OkCupid users raised an after-the-fact ethical firestorm about what manner of data harvesting and public release should be considered above-board. Police increasingly face difficult decisions in balancing new surveillance capabilities with the privacy rights of those they are charged to protect. While AI understandably is drawing much of the recent attention when it comes to digital ethics, the ethical challenges stemming from digital transformation extend much further. Another emerging technology, augmented reality, raises several ethical gray areas, not the least of which is how to handle the blurring of lines between which aspects of an experience are real and which are not. Blockchain implementations also open the door to ethical conundrums, such as how private information recorded on a blockchain could potentially be exploited. And ethical considerations will become more magnified in the coming decade, as quantum computing advancements come into sharper focus, setting in motion new ethical and security risks involving sensitive, encrypted data.
These are just a sampling of the serious issues that professionals and their organizations need to be prepared for when it comes to ethics in the digital transformation era. Increasing adoption of AI and other high-impact technologies comes with upside worthy of great optimism, but the risks, too, are increasing. Organizations owe it to the public to make sure that the rush to innovate does not make a new deployment’s trendiness or potential profitability the only measure of whether it should be greenlighted.
Editor’s note: This article originally appeared in CSO.