There is much chatter about large language models of late, divided more or less equally between optimism and scepticism. On the one hand are claims that the technology will revolutionise all aspects of our work and lives. On the other are serious doubts about its limitations, driven by concerns about truthfulness and transparency.

But asking what models like ChatGPT can do is the wrong question. Because it is a tool, we must instead look at the system of ‘human + technology’ and ask how we can best use it, examining our own objectives and competence in using it, as well as its fitness for this purpose.

ChatGPT is a powerful predictive language tool. It can create surprisingly plausible responses to prompts, having been trained on vast swathes of textual data. Its responses are constructed without regard to truthfulness and, in its current form, without transparency about which sources they are based upon. Unsurprisingly, therefore, it has been demonstrated to sometimes base responses on unreliable sources and to assert as fact things that are not true.

How could such a tool possibly be useful in generating business strategies, upon which the economic fate of companies depends? If painstaking human analysis of facts about markets and competitors, carefully crafted hypotheses about competitive advantage, and internally consistent and plausible action paths are replaced by unsourced and questionable factoids, how could this possibly represent progress?

Or could it be that, if you use ChatGPT in a very specific manner, it can in fact be helpful?

Ask a friend

All strategists know to be sceptical of ‘facts’. What is held to be true may not inevitably and reliably be so. Our understanding of a complex, dynamic situation is always incomplete. And even if something is true, there are always alternative ways of framing problems and solutions.

So the best way for strategists to think of ChatGPT is as a knowledgeable, confident, and very persuasive friend who is, nevertheless, prone to sometimes making things up or confidently asserting as facts things they are as likely to have read on the side of a cereal box as in a scientific journal. We would, of course, be foolish to rely unquestioningly on the strategies generated by such a friend.

However, they could in fact be very useful in several ways, provided we fully apply our powers of judgement. We could tap into this friend’s extensive worldly knowledge to characterise the competitive environment. We could generate ideas to base a strategy upon. We could simulate how different strategies might play out in different scenarios and become aware of risks and contingencies. We could tap into the friend’s powers of verbal dexterity to better articulate our strategy story. We stand to benefit, so long as we apply the powers of discrimination that any competent strategist must regularly employ.

In other words, ChatGPT and similar tools are more likely to be useful to an experienced strategist than to a naïve beginner. Such tools are no substitute for the cultivation of strategic minds.

So, we explored the ways in which the powers of experienced strategists might be extended through a series of experiments with ChatGPT on different aspects of the strategising process – ideation, experimentation, evaluation and the building of stories.


Ideation

The first stage for any experienced strategist is ideation. The best strategies are often creative, counterintuitive and contrarian. That sounds easier than it is: typically, prevailing industry or company logic drowns out “crazy” ideas. Opening up strategy to outsiders can mitigate this constraint, and we could view a large language model as just such an outsider.

We conducted two experiments to explore how this might work. In the first one, the objective was to create a new concept for a bakery. The initial response from the tool was rather conventional, suggesting the sale of savoury products only. Further prompting produced more radical ideas, including 3D printing of pastries and the use of AI to create unique flavours.

In a second experiment, we asked ChatGPT to come up with new ideas for a video streaming service. From a list of suggestions, we picked educational streaming services and deliberately steered the conversation towards the possibility of partnerships with universities, and then to the tactic of getting individual professors at a partner university to participate.

The two experiments demonstrated that responses tend to be broad or generic at first. If a strategist does not know how to ask the right questions and push the conversation in a more fruitful direction, this can result in a dead end. And once the ideas start to become more unusual, discriminatory power is required.

Despite some obvious limitations, ChatGPT has three big advantages when it comes to ideation. First, it is fast and easy to use; you can avoid all the effort of getting different people in the room. Second, the tendency toward conventional thinking can easily be overcome. Third, it is easy to generate many ideas. The most interesting ideas were, however, mixed with implausible, impractical, and untested ones. Only by sifting through these in a discriminating manner can you get a better understanding of what might work.

Experimentation

Once a company has sketched out a set of ideas, the conversation turns toward selection, which, for many companies, starts with thought experiments. The strategic implications of new ideas (or adjustments to existing approaches) are explored against the backdrop of different scenarios.

At the heart of the scenario planning process sit the descriptions of situations in which your strategy needs to prevail. A strategist’s judgement is required to pick the relevant parameters, but ChatGPT can take over from there, turning them into a vivid and usable narrative.

We tried this out, asking for two stories capturing the future of the travel industry. It took several iterations, but eventually we obtained two detailed and convincing stories covering economic development in different parts of the world as well as consumer preferences linked to climate change and demography.

One story painted a convincing scenario of the future market for senior-friendly vacations and cruises. But while the predictions were sound, ChatGPT was much less adept at actually testing out ideas: it suggested several avenues of exploration, but the details were insufficient.

A further complication is that scenario planning requires a strategy to be consistent and robust across a series of different scenarios, a requirement we found hard to realise with ChatGPT.

In short, you won’t be able to outsource experimentation to ChatGPT, but its ability as a storyteller makes it easier and faster to consider alternative futures.

Evaluation

Before a company is ready to launch a new strategy, it requires a clear picture of the market and its likely response. Will there be sufficient demand? Are there existing competitors it needs to be prepared for?

In our fourth experiment, we used ChatGPT to evaluate a business idea with these questions in mind. The idea was simple: introduce a strategy-making app. The potential market would likely be small companies, as many of them do not have sufficient budgets to hire consulting firms and have limited strategy expertise in-house. We first asked ChatGPT how big this potential market was. The answer was too generic, noting that 99.9% of all US businesses are small businesses. A more fruitful avenue was asking for potential competitors, which eventually led to a plausible list.

The main advantage here—as in all the other steps—is speed. It only takes a few minutes to go through the entire interaction. That’s sufficient as a first test and crucially allows you to narrow the search field for further analysis.

Building stories

Humans understand the world through stories. One of the main attractions of ChatGPT is its ability to write well, compensating for a blind spot of many strategists, who often underestimate the importance of how ideas are communicated, wrongly assuming that what they say matters most.

In experiment five, we instructed ChatGPT to write up the strategy for a UK window manufacturer. After we provided the content, an initial list was produced. A further iteration separated the challenge from the solutions. But the real value was generated in a final step, when we asked ChatGPT to write this up in a vivid and engaging manner.

Since building stories is both time-consuming and beyond the capacity of many managers, this might be the most useful application of ChatGPT in the strategy-making process. If you provide the facts, the tool can add style, which is crucially important to communication and implementation.

Helping the best to be better

All in all, our experiments found that the true value of ChatGPT always depended upon the judgement of the strategist using it; the technology is therefore likely to polarise rather than democratise strategy.

We demonstrated that it could help the best strategists to be even better. But the technology won’t replace their powers of discrimination. If anything, it will shine a brighter spotlight on them.

Learn more in MIT Sloan Management Review about the three lessons that can be drawn from chatting about strategy with ChatGPT. 

This WBS feature is an abridged version of the original article published by the BCG Henderson Institute.


Christian Stadler is Professor of Strategic Management and teaches Strategic Advantage on the Executive MBA and Distance Learning MBA. He also lectures on Strategic Leadership and Ethics on MSc Marketing & Strategy. 

Follow Christian Stadler on Twitter @EnduringSuccess. 

Martin Reeves is chairman of BCG Henderson Institute.
