
AI could navigate novel problems with human-centred judgement

A new study is the first to propose realistic ways of integrating wisdom into artificial intelligence, with the aim of developing systems that are more robust, transparent, cooperative, and safe.

Researchers, including Warwick Business School’s Professor of Behavioural Science Nick Chater, outline methods for training large language models to behave more wisely, exploring new architectures that can support wise reasoning, and developing benchmarks to measure AI wisdom.

The timing is critical: as AI capabilities rapidly advance, wisdom is not keeping pace, raising concerns about safety and reliability.

“Artificial intelligence is getting smarter every day, but one important human skill it lacks is wisdom,” said Sam Johnson, professor of psychology at Waterloo and co-lead author of the study. “Wisdom isn’t just about knowledge or intelligence. It’s about the mental skills needed to handle life’s challenges, such as making difficult decisions or navigating unpredictable social situations.”

The researchers note that while current AI systems excel at well‑defined tasks, they falter when problems are messy or ambiguous because they lack the broader set of strategies humans use to manage uncertainty. The new approach focuses on teaching AI to think about its own thinking — or metacognition — enabling it to recognise the limits of its knowledge, adapt to different contexts, weigh multiple viewpoints, and remain flexible as situations unfold.

“Wisdom has seemed too philosophical, too human‑centred to formalise for machines,” said Dr Igor Grossmann, professor of psychology at Waterloo and study co‑lead. “But by breaking it down into specific strategies such as intellectual humility, perspective‑seeking, and context adaptation, we can create a concrete roadmap for building AI that doesn't just compute, but reasons wisely.”

According to the researchers, wiser AI systems could better navigate unfamiliar problems and environments, collaborate more effectively on shared goals, provide clearer explanations to users, and act more safely by aligning more closely with human values.

“If the smartest person in the world were a toddler, we still wouldn’t hand them the nuclear codes,” Johnson said. “AI is increasingly resembling a child genius, still needing a healthy dose of wisdom from its human parents.”

Researchers from the University of Waterloo, Université de Montréal, the Max Planck Institutes, Santa Fe Institute, Stanford University, Warwick Business School and Google DeepMind contributed to this work. The researchers’ next steps include collaborating with industry to develop computational models of human wisdom to guide AI design.

Johnson, S.G.B., Karimi, A.-H., Bengio, Y., Chater, N., Gerstenberg, T., Larson, K., Levine, S., Mitchell, M., Rahwan, I., Schölkopf, B., & Grossmann, I. (2026). Imagining and building wise machines: The centrality of AI metacognition. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2026.01.002


Further reading:

Why do systems create problems then nudge us to fix them?

Can AI predict people's future illnesses?

What are the advantages and disadvantages of nudging?

Is bias causing business leaders to make mistakes?


Nick Chater is Professor of Behavioural Science and teaches Behavioural Sciences for the Manager on the Executive MBA, Executive MBA (London), Global Online MBA, and Global Online MBA (London).

He also teaches Judgement and Decision Making on the MSc Business and Finance, MSc Accounting and Financial Management, and MSc Accounting and Sustainability.

Understand the science behind human decision-making to solve challenges within your organisation with the three-day Executive Education programme Behavioural Science in Practice at WBS London at The Shard.

Discover more about Behavioural Science and Decision-Making. Receive our Core Insights newsletter via email or LinkedIn.