Data protection used to be a simple compliance task. Most data protection laws are based on the Organisation for Economic Co-operation and Development (OECD) Privacy Framework Basic Principles. The core of this framework can be summarized as transparency: the purposes of personal data collection are made known and justified to individuals, and their implicit or explicit consent is obtained before collection and processing. Furthermore, if an enterprise wants to put personal data to a new use, it must obtain individuals’ consent before proceeding.
That all sounds just about doable, but the enterprise must also contend with the disruptive practice of big data analytics, which indiscriminately collects massive amounts of data in the hope that a previously unforeseen insight will be discovered. This being the case, how can the now-contradictory concepts of transparency and big data analytics be reconciled when an enterprise begins with no idea of the uses it may find for the personal data it collects?
While regulators continue to call for transparency or for the anonymization of personal data to reduce harm, data controllers and privacy think tanks argue for a paradigm shift from strict compliance to a risk-based approach, one that places considerable stress on the importance of ethics in this increasingly data-driven world. For example, the Information Accountability Foundation (IAF) has a four-part Big Data Ethical Framework Initiative that has caught the interest of the Office of the Privacy Commissioner of Canada, which has awarded the IAF a grant for the project. The initiative aims to develop an assessment and enforcement framework for big data analytics that is grounded in ethical consideration.
Looking further ahead, the next big thing after big data analytics may be artificial intelligence (AI). A recent report from the Executive Office of the President of the United States, prepared by the US National Science and Technology Council, acknowledges the difficulty of achieving transparency in AI and suggests that, “Ethical training for AI practitioners and students is a necessary part of the solution.”
While hard rules appear to be failing in the world of privacy because of technological advancement, ethics is emerging as a viable alternative core value, one that will require considerable pondering by policymakers and lawyers.
Read Henry Chang’s recent Journal article:
“An Ethical Approach to Data Privacy Protection,” ISACA Journal, volume 6, 2016.