
The AI genie is out of the bottle. How should leaders respond to harness it for the greater good?

What happens when an unstoppable force meets an immovable object? Absolutely nothing is the answer, because, as the ancient paradox goes, two such entities could not possibly co-exist in the same universe.

Yet this is precisely the conundrum now confronting us as everything we once took for granted is thrown into doubt by artificial intelligence. It is the question being asked of entrepreneurs who until very recently would have spent days, if not weeks, brainstorming the name and mission statement of a new start-up, but who can now feed their challenge into a generative AI system and receive a workable formula within minutes. It is the question being asked, too, of students, who, since the release of ChatGPT in November 2022, have been able to ask it to write their essays.

So, what should we do? Surely we must act fast before this technology spins out of control.

Certainly, many are arguing the case to do just that. But in my view this is not the time to shy away from generative AI. Instead, the time has arrived for us to seize the day: carpe diem.

Before I explain how exactly we can do this, though, let’s look at the arguments over AI now raging in academia – the world to which I belong.

How AI progress is overcoming concerns

Although large language models (LLMs), the family of generative AI technologies to which ChatGPT belongs, are basically just word predictors, their capabilities are already impressive and improving by the day. LLMs can write and edit documents, condense longer texts into shorter summaries, or assign labels or categories to written passages based on their content or sentiment, and that’s just for starters. These neural network models can also reason about texts using learned knowledge, give feedback on them, and even write code.
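To make "word predictor" concrete, here is a toy bigram model in Python. It is a drastically simplified sketch of the next-word prediction task that underlies LLMs; real models use neural networks trained on vast corpora, not simple word counts.

```python
from collections import Counter, defaultdict

# A toy next-word predictor. This illustrates the prediction task at the
# heart of LLMs at a miniature scale; it is not a real language model.
corpus = "the cat sat on the mat and the cat slept and the cat ran".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it follows "the" three times, "mat" once
```

An LLM does the same thing in spirit, predicting the next token, but it conditions on the entire preceding context rather than a single word, which is what makes its capabilities so much richer.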

The worry for academics is that these are the very things they are supposed to do. They are supposed to be the ones marking the student essays or writing the research paper. Not a machine.

Of course, many people remain unconvinced by the power of LLMs. Common concerns centre on plagiarism, a lack of creativity, and fake or low-quality content generated by models that draw information indiscriminately from a vast pool of online material.

I believe the answer lies in the fact that AI is changing at an exponential rate. Concerns about fakery, for example, are already being addressed, with the rate of 'hallucinations' in leading models reportedly down to three or four per cent.

If a lack of creativity was once a fault line, there is now growing evidence to suggest otherwise. 

Generative AI is picking up new skills all the time. In a phenomenon called ‘emergence’, LLMs are starting to exhibit capabilities that we did not anticipate. We ignore this exponential growth at our peril.

Too late to turn back on AI

So, to go back to the original question about what is to be done, my opinion is that we in academia in particular need to recognise that the genie is out of the bottle. We cannot go back. Instead, what we need is a fundamental rethink of teaching and research. 

One option would be to encourage and support the use of AI tools by students and faculty. This need not undermine the integrity of the essay writing process. Instead, the focus would be on fostering the diversity of thought and creativity that lie behind an essay – including the creativity and prompts that students give their neural network model – while leaving the writing to the machine.

Turning to the production of research papers more specifically, it is important to note that the kinetics of traditional scholarship and AI are very different.

It can take one to two years to generate research output, and as long again for peer review. In the near future, AI could accelerate this process, so long as we practise due diligence.

Academic journals should require all authors to certify that they have vetted all the 'facts' generated by AI. Authors should also submit documentation detailing the AI tools they have used in the production of their paper. Humans would remain in the loop, the rule of thumb being that you can use AI, provided you are honest and open about it with your peers.

In the medium term, bespoke LLMs offer another way forward. Fine-tuned on datasets or tasks from a particular domain, these language models are tailored to the needs of users in that field, and they are already performing exceptionally well.

A case in point is LegalBERT developed by the University of Zurich, which has been specifically designed for legal text analysis and classification. Other up-and-coming GPTs have scientific and financial texts in their sights. Although all of these concentrate on ring-fenced content, they can fall back on the broader coverage of general language models to fill in any gaps.

Despite their early success, however, bespoke LLMs remain an under-exploited opportunity. So, let's build more of them. Here at the Gillmore Centre for Financial Technology, for example, we are building a GPT that will focus on the 500 or so research papers that our academics have written in the fintech space, in order to make them more accessible.
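As a loose illustration of how such a system might first surface the papers relevant to a question, here is a minimal keyword-overlap search in Python. The paper summaries below are invented placeholders, not the Centre's actual research output, and a production system would use learned embeddings rather than raw word counts.

```python
import math
from collections import Counter

# Hypothetical paper summaries (invented placeholders). A bespoke research
# assistant would first retrieve the most relevant papers for a question,
# then generate an answer grounded in them.
papers = {
    "paper_a": "detecting fraudulent reviews with machine learning",
    "paper_b": "privacy implications of third party data sharing",
    "paper_c": "blockchain settlement in financial markets",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_relevant(query):
    """Return the key of the paper whose summary best matches the query."""
    q = Counter(query.lower().split())
    return max(papers, key=lambda p: cosine(q, Counter(papers[p].split())))

print(most_relevant("how can we detect fraudulent reviews"))  # prints "paper_a"
```

Even this crude sketch shows why focusing a system on a ring-fenced corpus helps: the answer space is small, known, and attributable to specific papers.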

Businesses need to adapt to AI

The Gillmore Centre is also in the business of giving advice to enterprises trying to find their way through the new AI universe. On the one hand, AI offers all sorts of opportunities to companies looking to improve their processes. On the other hand, it can pose legal dangers if the content it produces is not regularly scrutinised by human beings.

Another issue is privacy. Never type anything business-sensitive into a public LLM!

Overall, my advice for companies is that, like the rest of us, they will have to fine-tune their response to AI.

But if we all manage to adapt to this fast-moving situation, then all of us stand to benefit from the huge powers of LLMs to create and categorise large bodies of text, and to interrogate them too.   

The trick, I believe, is a change in mindset. Returning to that age-old paradox of the unstoppable force and the immovable object, we might do well to remind ourselves of one of those scientific principles that we all learnt at school: that in a universe where mass and energy are one and the same, all forces are unstoppable. Thus, when a force hits an object, it is the energy that it carries which is transferred to the other object.

As astonishing and as alarming as the impact of AI may be, there is a strong case that we should embrace that energy, and harness it for the common good.

Further reading:

Eshghi, A., Gopal, R. D., Hidaji, H. and Patterson, R. (2023) Now You See It, Now You Don’t: Obfuscation of Online Third-Party Information Sharing, INFORMS Journal on Computing

Gopal, R. D., Hidaji, H., Kutlu, S. N., Patterson, R. A. and Yaraghi, N. (2023) Law, Economics and Privacy: Implications of Government Policies on Website and Third-Party Information Sharing, Information Systems Research

Kumar, A., Gopal, R. D., Shankar, R. and Tan, K. H. (2022) Fraudulent Review Detection Model Focusing on Emotional Expressions and Explicit Aspects: Investigating the Potential of Feature Engineering, Decision Support Systems, 155, 113728


Ram Gopal is Professor of Information Systems Management and Director of the Gillmore Centre for Financial Technology. He teaches Digital Transformation on the Executive MBA and Global Online MBA, and Blockchain and Cryptocurrencies on the MSc Management of Information Systems and Digital Innovation.

For more articles on Digital Innovation and Entrepreneurship sign up to the Core Insights newsletter here.