



In 2021, I made the decision to leave an almost 20-year career in IT audit, compliance and risk management to pursue a PhD in cybersecurity. Now that I am nearing the end of my PhD, this blog post reflects on that journey.
Finding a problem
My PhD was part of a four-year doctoral training program in the UK. The first year was designed as a refresher on research and academia for all enrolled students, regardless of background; for me, this was a revision of research methodologies, but also a window into the academic view of cybersecurity. While some of the topics covered overlapped with those I encountered in the late 1990s, when I last attended university for my master’s degree in information security (as it was called then), there were also stark differences.
First, the social sciences aspect of cybersecurity was emphasised. For example, my master’s concentrated on BS 7799 (the precursor to ISO 27001) and was quite technical, except for one module that looked at people and organizations. This must have resonated with me, as my career more or less ran in parallel with both themes. The PhD program, by contrast, was very much socio-technical. It required understanding psychological aspects, such as why developers, attackers and victims behave the way they do in cybersecurity-related incidents or scenarios.
The psychological aspect was, however, not novel, which leads to the next difference between the two degrees. My PhD program also emphasised the importance of secure software design via the seminal Saltzer and Schroeder paper from the 1970s. I had no recollection of this paper from my master’s (which does not mean it wasn’t mentioned then), likely because that degree was more corporate-focused. My current program is more academic but still aims for industry partnerships. In fact, I have been fortunate to meet representatives of some of the partners, and while I cannot divulge the organizations, they are important players in many aspects of cybersecurity, engineering and civil society.
Settling on a question
For me, the most important aspect of the program was deciding what I wanted to spend the remaining three years researching. By the end of the first term, I was certain that I wanted to stay close to a governance-related topic, as my work experience was one of the reasons my application to the program was successful. However, I also sought to combine this with an entirely new topic.
One of the mandatory modules was called Responsible Innovation. This is essentially a framework for enabling research and innovation for social good by deliberating on their impacts, both positive and negative, in an anticipatory manner. This was new to me: during my career, my work involved retrospectively assessing the adequacy of controls applied to risks, whether in audit, risk or compliance. I worked on very few thematic analyses, which is where I would have expected some form of anticipatory risk assessment. However, the level of anticipation expected by Responsible Innovation was greater and required the input of the public, not just academia or industry. This is an important aspect that I feel is often neglected in corporate governance; hence, I aimed to find a research question that would require engagement with the public in some capacity.
At the time, the ‘metaverse’ had been announced; while I had personal reservations about the concept, I believed it would be a good topic to focus on. My research proposal board agreed but advised me to concentrate on a specific use case. Finding one was harder than I had envisioned, until two events. First, I helped a local hospice on a voluntary project, where someone mentioned that companies were seeking to virtually resurrect the dead in the ‘metaverse’. Second, ChatGPT was released, which was ironic, as I had been advised to steer clear of researching AI because there were already ‘too many’ academics covering it.
Responsible use of AI and emerging technology
I certainly got my wish of researching an area far from corporate governance, but one that in some ways complements my work experience. I never thought I would be called a ‘Thanatechnologist’, i.e., someone who studies matters regarding death and dying with a focus on the technology aspects. Nor did I expect to research technology governance from a social sciences perspective.
However, academics have raised concerns about the societal impact of technological innovations since the 1970s. In 1980, the Collingridge Dilemma articulated the problem of controlling technology: its impacts are hard to predict while it is still emerging, yet by the time they become apparent, the technology is too entrenched to change easily. This dilemma remains valid today, despite AI having its origins around the time of Alan Turing. ‘Responsible’ is a common word in AI governance discourse, but interpretations of its meaning vary across academia, industry, government and the public. My own research is a grain of sand within the whole debate. While I did interact with participants across industries to gauge personal views, much more needs to be done. Further, there is the risk of hype with emerging technologies, such as innovations that seek to virtually resurrect the dead.
Overall, despite the difficulty of resetting one’s thinking from corporate to academic, and the loneliness (a PhD is very much a solo effort), I am grateful for this privilege. I sincerely hope my work will be of value not just to this profession but to everyone, as it seems that AI has become as inescapable as death.
Editor’s note: For further insights on this topic, read Khadiza Laskor’s ISACA Journal article, “Examining the Ethics of the Digital Afterlife Industry,” volume 3, 2025.