A Responsible AI tool could shield children from harmful content online
It’s nearly six years since 14-year-old UK schoolgirl Molly Russell took her own life after being exposed to graphic online material about self-harm.
Her father has since criticised the response of social media firms as “underwhelming” and says he has no confidence that the UK Government’s planned legislation, the Online Safety Bill, will prevent the kind of toxic content his daughter encountered.
We’re seeking to build a tool that can keep children safer and prevent online harm. This is a technology driven by Responsible Artificial Intelligence that can oversee and understand the subtleties of language and images – an AI that can understand context. This tool could provide the extra layer of insight to flag when children are at risk and shine a spotlight on risky or damaging communication.
Content without context doesn’t mean much – tech giants know this, but are still using a system that flags words and images without understanding who’s saying what, and to whom.
Knowing what goes through the mind of an individual who’s feeling suicidal, for instance, is crucial. If technology could make sense of patterns of behaviour – what that person is searching for online, who they are talking to and how often, what they’re watching, what advice they are not seeking (for instance, ignoring suicide helpline links, despite many recommendations) – it would provide an extra layer of oversight. These are the tools that we are building.
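The kind of pattern-reading described above can be pictured as a simple scoring function. The signal names, weights and threshold below are illustrative assumptions for exposition only, not the model we are building:

```python
# Illustrative sketch: combine behavioural signals into one risk score.
# Signal names and weights are hypothetical, chosen only for exposition.

def risk_score(signals: dict) -> float:
    """Weighted sum of behavioural risk signals, clamped to [0, 1]."""
    weights = {
        "self_harm_searches": 0.4,   # repeated searches for harmful material
        "late_night_activity": 0.1,  # sustained activity at unusual hours
        "isolated_contacts": 0.2,    # talking mostly to one unknown contact
        "ignored_helplines": 0.3,    # helpline links shown but never opened
    }
    score = sum(weights[k] for k, present in signals.items()
                if present and k in weights)
    return min(score, 1.0)

# A teenager repeatedly searching for self-harm content while ignoring
# recommended helpline links scores well above a casual browser.
alert = risk_score({"self_harm_searches": True, "ignored_helplines": True})
print(alert)  # above a review threshold of, say, 0.5
```

No single signal is alarming on its own; it is the combination – the pattern of behaviour – that an extra layer of oversight would surface.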
Today harmful or inappropriate content on social media is weeded out by algorithms informed by AI but, as we’ve seen, this level of moderation can be a blunt tool wielded without insight.
It flagged as a potential abuser, for instance, a concerned US father who sent an image of his son’s genitals to his doctor. Yet it failed to sound the alarm over a man who murdered his wife after searching for information on how to kill somebody and dispose of their body.
But without a helping hand from technology, it’s impossible to oversee the vast traffic of images and words exchanged on social media every day – some 500 hours of content are uploaded to YouTube every minute. Human moderators can’t keep up with what is often grim and distressing work.
Our proposed Responsible AI could be ready as a prototype in a matter of months – and offers the first step towards keeping children safe online.
We’re now sifting through vast quantities of language and images to compile ‘dictionaries’ of insights around each of the most harmful areas that threaten children and young people – and these include hate speech, cyberbullying, suicide, anorexia, child violence and child sexual abuse.
We’re infusing Google’s natural language processing algorithm BERT (Bidirectional Encoder Representations from Transformers) with a layer of knowledge, which can then allow the technology to understand language more as humans do.
These insights will allow the AI to gauge nuance and context. One friend chatting to another about their frustrations is different from an isolated teenager seeking information about suicide. How do you distinguish between intimate photos from consenting adults and those sent by a young girl who has been groomed by an older man?
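One way to picture how such a ‘dictionary’ might work alongside context is the sketch below. The phrases, categories and context rules are invented for illustration – they are not drawn from the actual prototype:

```python
# Illustrative sketch of a context-aware lexicon lookup.
# The phrases, categories and context signals are hypothetical examples.

HARM_LEXICON = {
    "suicide": {"end it all", "don't want to be here"},
    "cyberbullying": {"everyone hates you", "nobody would miss you"},
}

def flag_message(text: str, sender_is_isolated: bool,
                 repeated_exposure: bool) -> list:
    """Return (category, action) pairs for phrases found in the text.

    A bare keyword match is only escalated when contextual signals
    (isolation, repeated exposure) suggest genuine risk - the
    difference between a friend venting and a teenager in danger.
    """
    lowered = text.lower()
    hits = [cat for cat, phrases in HARM_LEXICON.items()
            if any(p in lowered for p in phrases)]
    if not hits:
        return []
    # Context gate: the same words carry different risk in different hands.
    if sender_is_isolated or repeated_exposure:
        return [(cat, "escalate") for cat in hits]
    return [(cat, "monitor") for cat in hits]

print(flag_message("I just don't want to be here anymore", True, False))
# -> [('suicide', 'escalate')]
```

A real system would sit this kind of gating on top of a language model rather than raw string matching, but the principle is the same: the words alone are not the signal; the words in context are.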
Technology platforms – social networks, search engines, online platforms – could use our technology as an unobtrusive layer of understanding tailored to their own recommendation engines.
This means technology that can understand the context of comments, social bonds between individuals, their age and relationships. It could flag, for instance, the age gap between the man and girl, the short time they’ve been online ‘friends’ and the differences between their social groups.
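The relationship signals just described can be sketched as simple metadata checks. The field names and thresholds here are assumptions made for illustration, not the platform rules themselves:

```python
# Illustrative sketch: turn relationship metadata into flags.
# Field names and thresholds are assumptions chosen for exposition.

from dataclasses import dataclass

@dataclass
class Relationship:
    age_a: int            # age of first account holder
    age_b: int            # age of second account holder
    days_connected: int   # how long they've been online 'friends'
    shared_contacts: int  # overlap between their social groups

def relationship_flags(r: Relationship) -> list:
    """Collect risk flags from the metadata around a relationship."""
    flags = []
    minor_involved = min(r.age_a, r.age_b) < 18
    if minor_involved and abs(r.age_a - r.age_b) >= 10:
        flags.append("large age gap with a minor")
    if r.days_connected < 14:
        flags.append("very recent connection")
    if r.shared_contacts == 0:
        flags.append("no overlap between social groups")
    return flags

# An adult messaging a 14-year-old they 'friended' three days ago,
# with no mutual contacts, trips all three flags.
print(relationship_flags(Relationship(34, 14, 3, 0)))
```

Each flag on its own is unremarkable; together they describe exactly the grooming pattern the surrounding text warns about.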
To date, lawmakers have largely allowed social media platforms to mark their own homework – prompting outrage from both carers of vulnerable youngsters and defenders of free speech. The line between moderation and censorship is a tightrope walk.
Meanwhile, in the UK the Government’s flagship internet regulation – the Online Safety Bill, nearly four years in the making – is treading a difficult path between free speech and protection as it enters what many hope is the final stage.
If tech businesses know that an extra level of Responsible AI could better govern reams of content, they nonetheless have little incentive to impose it. These are commercial platforms – fewer users mean less cash. Whistleblowers in recent years have spoken out against Meta’s algorithms and moderation methods and their harmful impact on individuals, accusing them of putting profit over people.
But legislators need a better understanding of the technology they are seeking to govern. If they understood what is possible – ‘intelligent’ technology that reads between the lines and sifts benign communication from the sinister – they could demand its presence in the laws they’re seeking to pass.
For every high-profile case such as Molly’s, there will be hundreds, probably thousands, of children who’ve been severely traumatised by harmful content but haven’t made the headlines.
Every day parents fear the bleak impacts of social media upon their kids – it’s our responsibility to protect them, and we’re running out of time.
S. Singh, D. J. Greaves, G. Epiphaniou (2022). Competitive Advantage in the Digital Economy (CADE 2022), pp. 117–120.
Shweta Singh is Assistant Professor of Information Systems and Management and teaches Global Sourcing and Cloud Technologies on the MSc in Management of Information Systems & Digital Innovation. She also lectures on Global Sourcing and Innovation on the undergraduate programme.