Helping hand: Companies need to understand when using AI helps and when it could cause hidden mistakes

It is 30 years since Jurassic Park stormed the box office, gripping audiences around the globe with its special effects and its cautionary tale of unchecked ambition.

Scientists may be no closer to recreating dinosaurs. However, the startling capabilities of generative AI have prompted a similar spectrum of wonder, excitement, and concern.

Managers have raced to embrace AI, fearful that they too could face extinction if they fail to keep pace with the rapid rate of technological progress.

Two-thirds of large companies in the UK now report that they use AI, as do more than a third of medium-sized businesses. But to paraphrase Jeff Goldblum, bosses have been so preoccupied with whether or not they could use AI that they have not stopped to think if they should. Or, more pertinently, how they should use AI.

For all its undoubted potential, there are still many unanswered questions about AI’s role in the workplace and how firms can best harness its rapidly evolving capabilities.

The pioneering research I recently conducted with colleagues from Harvard Business School, The Wharton School of the University of Pennsylvania, MIT Sloan School of Management and Boston Consulting Group (BCG) offers some important insights into how AI might both help and hinder knowledge workers. It also highlights the benefits and risks this could create for organisations and their leaders.

When should workers use AI tools?

We recruited 758 BCG consultants, who were randomly assigned to one of two tasks. Both were designed to be realistic tasks that consultants might face in their regular work.

One was an analytical business strategy task that asked consultants to analyse the brand performance of a company and provide its chief executive with clear recommendations about which brand to focus on. The second was a creative task, in which consultants needed to develop a new footwear product for a fashion company to target an under-served part of the market.

We divided the consultants working on each task into three groups. The first group used AI without any guidance. The second group used AI after watching a very brief training video on how to use it. The final group had no access to the technology.
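
To make the design concrete, here is a minimal sketch, in Python, of the kind of balanced random assignment described above. It is not the study's actual code; the group labels, cell structure, and seed are assumptions for illustration only.

```python
import random
from itertools import product

# Illustrative sketch (not the study's actual code): balanced random
# assignment of the 758 consultants across the 2 tasks x 3 conditions
# described above. Labels and seed are placeholders.

TASKS = ["strategy", "creative_footwear"]
CONDITIONS = ["ai_no_guidance", "ai_brief_training", "no_ai"]
CELLS = list(product(TASKS, CONDITIONS))  # six experimental cells

def assign(participants, seed=0):
    """Shuffle participants, then deal them round-robin into the six
    cells so that group sizes stay balanced."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    return {p: CELLS[i % len(CELLS)] for i, p in enumerate(shuffled)}

consultants = [f"consultant_{i}" for i in range(758)]
groups = assign(consultants)
```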

Those using AI completed both tasks more quickly. However, there was a stark difference in the quality of their work across the two tasks.

In the creative footwear task, those using AI outperformed those who had no access to AI by a margin of 40 percent. And while all AI users benefitted, the lower-performing consultants benefitted most, closing the gap with their stronger peers.

However, on the more strategic decision-making task, those who used GPT-4 performed worse than their counterparts without it.

Consultants who used AI were 20 percent less likely to produce correct solutions. Yet their recommendations were judged to be of higher quality because they were so persuasive and well written.

In other words, the professional consultants who used genAI were more likely to be wrong, but still managed to sound more convincing. That presents businesses with a significant problem.

The ‘jagged frontier’ of AI’s abilities

The divergence in results highlights one of the main problems around AI adoption. It is extremely hard to know where the limits of its capabilities lie.

You can rely on AI to help with tasks inside the ‘jagged frontier’ of its abilities and produce high-quality results. However, if you adopt it for tasks beyond the frontier, you are more likely to make mistakes.

This is complicated by the fact that large language models (LLMs) remain fundamentally opaque. Sometimes they produce incorrect results that nonetheless appear plausible and highly convincing. That makes it difficult to predict where they might fall short.

Even if you manage to accurately map the jagged frontier of AI's abilities today, the technology is evolving so fast that the boundary line may well shift tomorrow.

Artificial or human intelligence?

Another major risk that became evident during our study was a decrease in cognitive diversity. This led to a smaller pool of creative ideas, even in tasks that were well suited to using AI.

In the footwear task, those using AI often generated strikingly similar ideas. The consultants without access to AI worked more slowly and produced lower-quality ideas on average. However, they came up with a more diverse range of proposals.
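
One crude way to put a number on that kind of convergence, sketched below purely for illustration (this is not the measure used in our study), is the average word-overlap between pairs of proposals: the more two write-ups share the same words, the less diverse they are.

```python
from itertools import combinations

# Hypothetical sketch, not the study's methodology: gauge how similar a
# set of idea write-ups is via the average pairwise Jaccard overlap of
# their word sets. Higher overlap = less diverse ideas.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts: 0 (disjoint) to 1 (identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def average_overlap(ideas):
    pairs = list(combinations(ideas, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Invented placeholder proposals, for illustration only
ai_ideas = [
    "lightweight recycled knit trainer for urban commuters",
    "lightweight recycled knit sneaker for city commuters",
]
human_ideas = [
    "modular sandal with swappable soles for travellers",
    "heated winter boot aimed at outdoor night workers",
]
print("AI ideas overlap:   ", round(average_overlap(ai_ideas), 2))
print("Human ideas overlap:", round(average_overlap(human_ideas), 2))
```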

This suggests a company could benefit from relying more on human ingenuity to produce distinctive outputs. This is particularly true when radical innovation is needed and competitors rely heavily on AI.

The importance of experimenting with AI

For companies, there is arguably no longer a binary choice between adopting AI and ignoring it. Nor is there a simple answer to the question, “For which tasks should companies use AI?”

Instead, business leaders should learn to experiment systematically and strategically with AI. The aim is to identify which tasks suit the technology best and to find ways of using AI while monitoring the risks involved.
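
As a hypothetical example of what such an experiment might look like in practice, the sketch below compares rubric scores for an AI-assisted pilot group against a control group and uses a simple permutation test to ask whether the gap could be down to chance. The scores are invented placeholders, not data from our study.

```python
import random
from statistics import mean

# Hypothetical sketch of an in-house experiment: compare graded output
# quality (say, a 1-10 rubric score) between an AI-assisted group and a
# control group, then use a permutation test to ask whether the observed
# gap could plausibly be chance. All scores are invented placeholders.

ai_scores      = [7.5, 8.0, 8.2, 6.9, 7.8, 8.4, 7.2, 7.9]
control_scores = [6.8, 7.1, 6.5, 7.4, 6.9, 7.0, 6.6, 7.2]

def permutation_p_value(a, b, n_iter=10_000, seed=1):
    """Share of random relabellings whose mean gap is at least as large
    as the one actually observed (two-sided)."""
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = a + b
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter

print(f"AI mean: {mean(ai_scores):.2f}  control mean: {mean(control_scores):.2f}")
print(f"p-value: {permutation_p_value(ai_scores, control_scores):.3f}")
```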

Applying AI indiscriminately could lead to productivity losses on tasks that still require greater human judgment. It could also erode accountability on critical tasks and cause reputational damage.

Equally, ignoring AI could mean losing out in the competitive race for technology adoption and efficiency. It could also deny knowledge workers the opportunity to advance their skills and focus on higher-value tasks. Companies need to create new roles and develop new forms of working and organising to lead systematic experimentation.

How to experiment with generative AI models

The optimum way to use AI will remain unclear for the foreseeable future. The onus will therefore be on managers to keep experimenting with the technology as it evolves.

Our research group has developed a tool to help organisations experiment with genAI and enhance how professionals use it. We are keen to work with organisations that want to trial this research-based tool and methodology.

Even if companies don’t stop to question whether they should adopt AI, they should ask themselves how they can harness the technology in a responsible way to make work more productive and more meaningful for employees working on the jagged frontier.

Further reading:

Beyond the hype: what managers need to ask before adopting AI tools

Who will benefit from AI in the workplace and who will lose out?

Pass the IP: How generative AI will re-shape intellectual property

Navigating the jagged technological frontier

Hila Lifshitz is Professor of Management and a visiting faculty member at Harvard University's Lab for Innovation Science. She teaches Digital Transformation on the Executive MBA and Distance Learning MBA, and Managing Digital Innovation on the MSc Management of Information Systems and Digital Innovation.

Follow Hila Lifshitz-Assaf on Twitter @H_DigInnovation.

Learn more about adapting to AI on the School's four-day Executive Education course Business Impacts of Artificial Intelligence.

For more articles on the Future of Work sign up to Core Insights.