
Research conducted at Warwick Business School and other institutions will be crucial to better understanding the opportunities, risks, and limitations of AI

Science fiction writers are fascinated by humanity’s capacity for self-destruction, and AI has provided them with the inspiration for many great Hollywood villains.

Some, like the android Ash in Alien or HAL in 2001: A Space Odyssey, strike far from home, where no-one can hear you scream. Others target us from a post-apocalyptic future, like the Terminator.

Some even create alternative AI worlds to control and cultivate us, like The Matrix.

But the most unsettling are those that lie just beyond the frontiers of current technology – haunting glimpses of a future that is one scientific ‘breakthrough’ away. 

Picture Alicia Vikander’s sublime but manipulative robot Ava in the Oscar-winning film Ex Machina.

No wonder the advent of ChatGPT has tapped into our collective concerns, compounded by calls for greater regulation from Geoffrey Hinton (the Godfather of AI), tech industry luminaries such as Steve Wozniak and Elon Musk, and even ChatGPT creator Sam Altman himself. 

Will robots really outsmart humans within 10 years? Is humanity hurtling towards an AI apocalypse? 

“Hinton talks about autonomous weapons and labour market implosion. The tone is pretty terrifying,” says Matt Hanmer, from the Gillmore Centre for Financial Technology at Warwick Business School. 

“And that’s just the tip of the iceberg. We are probably witnessing the most rapid technological development since the industrial revolution. It’s like a gold rush, with the emphasis on being the first to develop new technology and get it out there. 

“As a result, AI has been unleashed on companies and governments and they have to adopt it to survive. The question is, how do they do that in a responsible way?” 

There is no doubt that AI – an umbrella term for a range of emerging technologies including machine learning, deep learning neural networks, and large language models (LLMs) – poses huge challenges, or that recent progress has surprised and alarmed experts who have worked on it since the 1980s. 

However, current models like ChatGPT are unlikely to emulate science fiction’s all-conquering automatons – at least not yet. 

In reality, the threats created by AI are not shared equally by humanity as a whole. 

The greatest risks are faced by those sectors of society that are already vulnerable to disruption.

What are the limits of generative AI?

Nick Chater, Professor of Behavioural Science, has known Hinton for 30 years and continues to work at the intersection of cognitive science and neural networks, having edited the book Human-Like Machine Intelligence in 2021.

“ChatGPT is incredibly good at collecting the information you ask for, compressing it, and filling in the gaps,” he says. 

“If you start typing in a line from Shakespeare, it will finish that speech for you. But when it reaches the end, it will keep going. It will try to improvise and write gibberish. 

“As impressive as these large language models may be, they are still only query-answering machines. You put information in one end and it shoots out the other.

“They aren’t mulling things over and they certainly aren’t plotting world domination. That’s the positive side, but it doesn’t mean it’s wrong to be worried. 

“Recent breakthroughs have been truly astonishing – even Geoff [Hinton] didn’t see them coming. That is really unusual. These innovations normally do exactly what we expect or a bit less. 

“It means we can’t be sure what will happen next. When the first nuclear explosions were carried out, some scientists said they didn’t know if it would break down space and time and destroy the Earth. They didn’t think that would happen, but they couldn’t be sure. AI is similar.” 

Professor Chater is part of the €8 million TANGO project, funded by the European Union’s Horizon Europe research and innovation programme to develop trustworthy AI. 

His focus will be ‘virtual bargaining’ – the process of deciding how to act in a given situation by imagining the agreement we would reach if we had time to negotiate with the other party, then acting on that implicit agreement – and how AI might replicate this vital human process.
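To make the idea concrete, here is a minimal toy sketch – an illustration of the general principle only, not Professor Chater’s or TANGO’s actual model. Each party privately works out the deal an explicit negotiation would produce (below, the joint action that maximises both sides’ gains over a no-deal fallback) and then simply acts on it. The payoff table and the Nash-product rule are invented for illustration.

```python
# A toy sketch of 'virtual bargaining' (illustrative assumptions only):
# each agent picks the joint action the pair would agree on if they could
# negotiate explicitly, here chosen by maximising the product of both
# sides' gains over a no-deal fallback (a Nash-bargaining-style rule).

# Hypothetical payoffs: (my_action, your_action) -> (my_payoff, your_payoff)
PAYOFFS = {
    ("wait", "wait"): (1, 1),
    ("wait", "go"):   (2, 3),
    ("go",   "wait"): (3, 2),
    ("go",   "go"):   (0, 0),   # e.g. two drivers pulling out at once
}
FALLBACK = (1, 1)               # what each side expects with no agreement

def virtual_bargain(payoffs, fallback):
    """Return the joint action both parties would implicitly agree to."""
    best, best_score = None, float("-inf")
    for joint_action, (p1, p2) in payoffs.items():
        gain1, gain2 = p1 - fallback[0], p2 - fallback[1]
        if gain1 < 0 or gain2 < 0:
            continue            # no one accepts a deal worse than the fallback
        score = gain1 * gain2   # the bargaining surplus shared by both sides
        if score > best_score:
            best, best_score = joint_action, score
    return best

print(virtual_bargain(PAYOFFS, FALLBACK))  # -> ('wait', 'go') in this toy table
```

The point of the toy example is only that coordination can emerge without any actual communication: each side simulates the negotiation instead of having it.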

“Comprehension and negotiation are the cornerstones of human social interaction,” he says.

“These are the challenges that must be met to create AI systems that work collaboratively alongside people, rather than merely serving as valuable computational tools. We aren’t much closer to that.

“A more pressing concern is that systems like ChatGPT could become the arbiter of all information. There is a real danger that people will become reliant on these models for information. It would give tech companies – and the governments behind them – unbelievable power.

“That is one of the huge challenges facing regulators.” 

How will different countries regulate AI?

Another is the lack of a universal approach to regulation in different countries. 

The COVID-19 pandemic demonstrated how difficult it is to achieve consensus and co-ordinated action across continents, even in the face of a common threat. 

How to respond to the rapid progress of AI is provoking similar differences of ideology. 

Europe seems set to take a leading role in seeking to regulate AI, hoping that the controls it puts in place will be adopted by multinational companies keen to do business on the continent and will cascade around the world, much as GDPR did.

The US is some way behind but is also considering stricter rules in the wake of an open letter from the Future of Life Institute in March, calling for AI labs to pause the training of systems more powerful than GPT-4 for at least six months to allow regulators time to catch up.

The letter was signed by more than 1,000 tech leaders, including Apple co-founder Wozniak, Twitter owner and Tesla CEO Musk, and deep learning pioneer and Turing Award-winner Yoshua Bengio. China, on the other hand, would prefer more targeted regulation of algorithms. 

James Hayton, Pro-Dean at WBS and Senior Research Fellow at the Institute for the Future of Work, says: “Governments and governing bodies across the world are rushing to understand how to balance the need for social protections with the desire to facilitate rather than constrain innovation.

“The UK is emphasising innovation, while the EU is emphasising protection. As a result, there is a clear risk that restrictive policies in one jurisdiction may drive innovation to other countries, at the expense of the cautious nation.” 

It is an added incentive for governments, regulators, and individual organisations to embrace the opportunities that AI has to offer. 

The UK could be one of the biggest beneficiaries, as it is better prepared for AI than many countries. Consulting giant McKinsey predicts that AI will increase GDP by 22 per cent by 2030. It could contribute close to £800 billion to the UK economy by 2035.

The key will be to adopt AI in a socially responsible way, one that recognises both the risks to firms and the risks to stakeholders whose lives will be affected – an approach supported by tools such as the SISA strategy tool developed at WBS.

Ram Gopal is Professor of Information Systems Management and Director of the Gillmore Centre for Financial Technology, which is developing a new GPT focused on the 600 research papers its academics have produced, making them more accessible to a wider audience.

“In my view, this is not the time to shy away from generative AI,” he says. 

“We need to recognise that the genie is out of the bottle. We cannot go back, so we ignore this growth at our peril. 

“Concerns that much of the content generated is fake or low quality are already outdated. AI is changing at an exponential rate. 

“The rate of ‘hallucinations’ is now down to three or four per cent, and while a lack of creativity was once a fault line, there is now ample evidence to suggest otherwise. It can even write code. 

“There are certainly challenges for businesses. For example, LLMs can pose legal dangers if the content produced is not regularly scrutinised. Privacy is another issue. 

“Companies, like the rest of us, will have to fine-tune their response. 

“If we manage to adapt to this fast-moving situation, then all of us stand to benefit. As astonishing and alarming as it may be, we should embrace AI and harness it for the common good.” 
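As a rough, assumed sketch of how a research-paper GPT like the one mentioned above typically works – not the Gillmore Centre’s actual implementation – such a tool first retrieves the passages most relevant to a question and then hands them to a language model as grounding for its answer. The three ‘papers’, the query, and the TF-IDF retrieval step below are invented for illustration.

```python
# A minimal, assumed sketch of retrieval-augmented question answering over a
# small corpus of research papers. All names, texts, and the query are
# hypothetical; a real system would use a full LLM and a vector database.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "Machine learning models for detecting fraud in payment networks.",
    "Blockchain settlement and the cost of cross-border transactions.",
    "Large language models as research assistants in finance.",
]

def retrieve(query, corpus, top_k=2):
    """Rank corpus passages by TF-IDF cosine similarity to the query."""
    vectoriser = TfidfVectorizer().fit(corpus + [query])
    scores = cosine_similarity(
        vectoriser.transform([query]), vectoriser.transform(corpus)
    )[0]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [text for _, text in ranked[:top_k]]

# The retrieved passages would then be passed to a language model as context,
# so its answer is grounded in the papers rather than generated from thin air.
context = retrieve("How can AI help spot payment fraud?", papers)
print(context)
```

Grounding answers in retrieved source passages in this way is also one practical response to the hallucination and legal-risk concerns Professor Gopal raises, because each answer can be traced back to a specific document.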

WBS researchers are already working on AI tools to help tame the 'Wild West' of social media by protecting children and vulnerable adults from harmful content. 

These ‘robocop programmes’ could force social media giants to take greater responsibility for creating a safer space online, without resorting to the kind of widespread censorship that would create concerns about freedom of speech.

Will AI improve healthcare and sustainability?

Healthcare is another area where the potential of AI has generated great excitement. 

An ageing population has brought more complex care needs and rising costs – a challenge exacerbated by the fallout from the COVID-19 pandemic. AI offers the tantalising prospect of tools that could ease the pressure on overstretched medical staff and resources. 

Hospitals from Scotland to Switzerland are now trialling AI as a diagnostic tool for radiologists in the hope that it will accelerate and augment the decisions they make. 

However, an in-depth study in the US has tempered expectations. Three departments at a major hospital used AI tools to examine breast cancer, lung cancer, and bone age scans. 

In all three, radiologists found the technology created greater uncertainty as its results differed from their own initial judgements without providing a clear, underlying reason. 

The study, which was co-authored by Hila Lifshitz-Assaf, Director of the AI Innovation Network at WBS, found that just one of the departments consistently incorporated AI results into its final judgements. 

It may be a matter of time before developers overcome such challenges. Until then, a more likely prospect is that LLMs could be used to ease the burden of paperwork on doctors. But that’s not all. 

Scientists have already used AI to analyse 6,680 compounds and discover nine potential antibiotics, including one that can kill the deadly superbug Acinetobacter baumannii.

Eivor Oborn, Professor of Healthcare Management and Innovation, says: “Artificial intelligence could have exciting applications for pharmaceutical companies, enabling them to explore different combinations to develop new drugs. 

“It could also help doctors create personalised treatment plans for patients with diseases like cancer. 

“However, robots are unlikely to replace doctors. When we are sick or at the end of life, we value that human quality – compassion. AI is not going to replicate that any time soon.”

AI could also help tackle the looming crisis of climate change. Scientists have warned that emissions must be almost halved by 2030 to avoid the worst effects of global warming. 

One hope is that activist investors can use AI to identify companies with strong ESG goals, helping them exert more pressure on firms to prioritise environmental concerns. 

“The problem is it’s very difficult for AI to look at all the different aspects of ESG and decide whether a company ticks a single box,” says Isabel Fischer, Reader in Information Systems.

“It’s also difficult for AI to detect greenwashing. If a firm claims it is delivering on a green agenda, it will put certain things in place that appear to be environmentally friendly, but a more robust check reveals things are more nuanced.” 

Dr Fischer, who teaches digital leadership and innovation, has developed a teaching case on Rho AI, a firm that has moved from the broad search terms requested by its clients to focus on more specific datasets such as carbon footprint, which produce narrower but more reliable results.

“There were really high expectations of what the technology could achieve, but it wasn’t that sophisticated,” she says. 

“It had to go through a period of disillusionment to moderate those expectations and target the data more effectively to produce more meaningful results. 

“If you look at generative AI like ChatGPT and Bard now, some of my academic colleagues will tell you we are at the peak of a similar hype curve. 

“Generative AI is probably not as good as a lot of people think it is and it is not ethically black or white. It’s important to have a more balanced view. 

“It comes back to the golden rule of AI: if you put garbage in, you get garbage out.” 

So, the likelihood is that AI is neither our saviour nor the sword of Damocles hanging over us. It will have a significant impact on the way we work, how we live, and the healthcare we receive. But we have yet to fully understand the consequences. 

Research, like that being conducted at WBS, will be crucial to better understand the opportunities, the risks, and the nuances associated with AI and to help regulators harness it for greater good.

Further reading:

Chater, N. (2023) How could we make a social robot? A virtual bargaining approach, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences

Lebovitz, S., Lifshitz-Assaf, H. and Levina, N. (2022) To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis, Organization Science, 33(1), 126-148.

Fischer, I., Beswick, C. and Newell, S. (2021) Rho AI – Leveraging artificial intelligence to address climate change: financing, implementation and ethics, Journal of Information Technology Teaching Cases, 11(2), 110-116.
