
Stephen Hawking once said: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be our last, unless we learn how to avoid the risks.”

If we are to confront the risks foreseen by Professor Hawking, we first need to understand them.

Today, AI is the most transformational technology we have ever seen. Most technology we touch, from virtual assistants to social media and shopping apps, is driven by AI. It affects almost every sphere of our lives, including our health, wealth, social lives, and future opportunities.

With such a pervasive and powerful presence, the AI we use must ultimately be trustworthy. And this requires AI to possess four fundamental characteristics: it must be fair, it must be ethical, it must be transparent, and it must reduce or mitigate any algorithmic bias or discrimination. This is what we mean by responsible AI.

But the AI of today has a bad reputation: recent history is littered with examples of unwitting discrimination. From the moment individuals apply for loans, jobs, or housing, software begins making low-level decisions, often with dire impacts on people from marginalised communities.

Take the AI that was used by US authorities to assess potential recidivism risk (the likelihood that a defendant would reoffend).

Judges in the US used the COMPAS algorithm to help decide whether to grant bail. However, the algorithm incorrectly classified black defendants as ‘high risk’, suggesting they were twice as likely to commit a future crime as their white counterparts.

There are many other instances of AI displaying racial bias. A facial analysis algorithm used by UK authorities to check passport photos, for example, proved deficient for black people, mistaking lips for an open mouth in some instances.

Companies are beginning to recognise their responsibilities. In June this year, Microsoft announced it would retire a number of AI-driven facial analysis tools, such as emotion recognition software, over fears that these were ‘unscientific’ and open to abuse.

The software giant is also restricting an AI-driven feature that mimics real voices, which could be abused to create deepfake audio.

When discrimination occurs, we blame the underlying software. But is this the fault of the technology or does the real problem lie elsewhere?

Today we have a myopic model of AI, based on three elements: the training data, the algorithm (which generates the results), and the actual output. When AI models exacerbate bias and discrimination, we blame one of these three elements. However, the true solution lies in understanding the roots of human bias.

Humans, not AI, should be blamed for discrimination

My argument is this: if you want to understand AI, you must understand human behaviour. We are all born unbiased, but then learn our prejudices from a wide range of influences such as culture, language, society, peers, values and so on. 

AI today is simply revealing deep-rooted societal discrimination through a more powerful lens. Instead of blaming the software, we should be asking how and where these biases originate in society before they emerge in technology.

This endeavour doesn’t just require computer science, but also psychology, sociology, economics, philosophy, and behavioural science; we must draw on all of these disciplines to understand how we become what we are.

We interpret the world through language, and we are using AI to understand how the meanings of words are associated in context, within a multi-dimensional space that encompasses our culture, society, values, and education.

This helps us understand how these elements build up layers of understanding of the world, and how those layers can be peeled back. That is the beauty of our work.
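To make this a little more concrete, the sketch below shows how word associations in that multi-dimensional space can be probed. It is a minimal illustration that assumes the open-source gensim library and a small pre-trained GloVe model, not the models we build in our research.

```python
# A minimal sketch of how word embeddings expose learned associations.
# Assumes the gensim library and a small pre-trained GloVe model
# (downloaded and cached on first run); illustrative only.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# Cosine similarity measures how closely two words sit in the
# multi-dimensional "meaning space" learned from large text corpora.
pairs = [
    ("doctor", "he"), ("doctor", "she"),
    ("nurse", "he"), ("nurse", "she"),
]
for w1, w2 in pairs:
    print(f"{w1:>8} ~ {w2}: {vectors.similarity(w1, w2):.3f}")

# Asymmetries in these scores mirror the societal biases present in the
# text the model was trained on: the "more powerful lens" described above.
```

Even a toy probe like this shows that the associations a model learns come from us, from the language we write and the culture it encodes, rather than from the mathematics itself.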

If we can understand how these connections work, we can begin to see how biases creep into the human mind.  Ultimately, we want this level of responsibility to be embedded in the AI that will affect us all.

In another research project, we are striving to ensure that the AI we use today is trustworthy. Here we’re working to build a further layer of AI that will sit above current models and provide the elements that are crucial for trust: fairness, ethics, transparency, and bias mitigation. A technological fix for the technology.

How do you go about classifying content on social media that is legal but harmful, on a platform such as TikTok, which is enormously popular with young users?

TikTok’s secret algorithm gauges users’ preferences and behaviour and drives relevant content towards them, exploiting their vulnerabilities to maximise screen time. As the platform grows in popularity, so does its potential to cause harm.

This is impossible to monitor manually: social media generates vast quantities of data, and it is beyond the scope of human content moderators to keep track. Today, TikTok uses a combination of human moderators and AI-driven tools to try to catch harmful content.

Designing more responsible AI

At Warwick, we are designing an AI that can behave like a human moderator, spotting and tagging harmful content linked to self-harm, suicide, cyberbullying, anorexia, and hate speech.

Our prototype adds a layer of responsibility that treads a difficult path, aiming to moderate content and limit harm without violating a fundamental human right: freedom of speech. This requires a level of technological sophistication that goes a step beyond the moderation efforts of social media companies such as Meta (formerly Facebook).
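As a rough illustration of the kind of component such a moderation layer builds on, the sketch below trains a toy text classifier with scikit-learn on a handful of invented posts. The data, features, and model are illustrative assumptions, not our prototype.

```python
# A minimal sketch of a harmful-content classifier of the kind a
# moderation layer might build on. The tiny labelled dataset and the
# TF-IDF + logistic regression pipeline are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = harmful, 0 = benign.
posts = [
    "you should just disappear, nobody wants you here",
    "tips for skipping meals so no one notices",
    "great run this morning, feeling strong",
    "anyone recommend a good book on gardening?",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a linear classifier: crude, but it shows
# how a model learns language patterns rather than single keywords.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "nobody would even notice if I was gone"
prob_harmful = model.predict_proba([new_post])[0][1]
print(f"estimated probability of harm: {prob_harmful:.2f}")

# In practice such a system flags borderline cases for human review
# rather than removing them silently, balancing harm reduction
# against freedom of speech.
```

The design choice that matters here is the output: a probability that can be thresholded and escalated, rather than a blunt delete, which is what lets a system moderate harm without silencing legitimate speech.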

At WBS, we’ve also been advising lawmakers on how to boost public trust in AI and how to deploy it responsibly ahead of planned legislation to regulate use of the technology.

Used judiciously, AI can tap rich seams of information and contribute valuable insights for society, and that data can come from unlikely sources.

Today, policies that affect everybody are largely made – in the West – by white men, who may have little understanding of the lives of marginalised communities. How do you begin to understand the problems they experience and design policies which can target these issues?

By using AI to analyse the language of rap music over half a century, we are beginning to understand the concerns of the African American community through their own voices.

In this project we are designing a responsible AI that can close the gap between these problems, as expressed by the rappers themselves, and adequate policy solutions.

We also use AI to sift through Twitter data generated in outrage, at the killing of George Floyd and in the wake of atrocities such as mass shootings, to assess how that outrage and protest translated into policy.
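The sketch below gives a flavour of how such text can be mined for shifting concerns over time. The toy snippets and the simple keyword counts are illustrative assumptions, not our actual pipeline.

```python
# A minimal sketch of surfacing community concerns from text over time.
# The toy documents and the keyword-count approach are illustrative
# assumptions, standing in for a far richer corpus and analysis.
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical documents grouped by period (lyrics, tweets, etc.).
corpus_by_period = {
    "1990s": [
        "police stop us on the block every night",
        "no jobs on this side of town",
    ],
    "2020s": [
        "say his name, justice for george floyd",
        "we need policy change now, not more promises",
    ],
}

# Count the most frequent unigrams/bigrams per period: a crude proxy
# for how a community's expressed concerns shift across decades.
for period, docs in corpus_by_period.items():
    vec = CountVectorizer(stop_words="english", ngram_range=(1, 2))
    counts = vec.fit_transform(docs).sum(axis=0).A1
    top = Counter(dict(zip(vec.get_feature_names_out(), counts))).most_common(5)
    print(period, [term for term, _ in top])
```

Scaled up to decades of lyrics and millions of tweets, the same idea lets concerns be heard in a community's own words and placed alongside the policies meant to address them.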

My vision is to harness the power of AI and work hand in hand with authorities and industry to design responsible AI technology for a positive impact.

AI that can understand the origins of discrimination and bias at a deeper level could be groundbreaking. We – academics, business, and government – must face up to this responsibility. If we don’t, who will?

Dr Shweta Singh, Assistant Professor of Information Systems and Management, explores the evolution of AI and its impact.
