
Open to change: A charity model of Company Limited by Guarantee could address key governance challenges for AI firms

When ChatGPT launched, its ability to answer questions and complete tasks with startling speed led some commentators to predict a Terminator-style scenario in which machines overtake humanity.

A year on, that hype has given way to a discussion of how to control AI, putting the spotlight on a more opaque topic: corporate governance.

The recent drama at OpenAI – where the board first fired and then reinstated CEO Sam Altman – was a fresh reminder that the question of who controls the companies driving technological progress is not purely academic.

In principle, the idea of public sector ownership is appealing, but if history is any guide the centre of gravity in innovation will remain in the private sector. 

This is not to say that governments play no role. They have to develop effective regulation, and in the defence sector in particular they will take a hands-on approach. But as far as more general applications are concerned, investment is likely to come from the private sector.

So how should control of AI – that is, corporate governance – be structured? The more common models, including the capped-profit model, are problematic.

Instead, we suggest that the Company Limited by Guarantee (CLG) model, which is essentially a company without shareholders, should be considered.   

Why more common corporate governance models don’t work for AI

It is easy to see how the traditional company limited by shares may be susceptible to bias and manipulation by shareholders. As soon as one entity controls a significant portion of a company's shares, the company's commitment to shareholder returns puts its actions at serious risk of bias. Such a model is even vulnerable to a hostile takeover by a nefarious player. Ethics, then, are unlikely to take centre stage.

The alternative corporate structure adopted by OpenAI was supposedly designed to balance innovation with independent action. Here, a capped-profit limited partnership can accept outside investment, but returns to investors are limited to a certain level (100x).

The capped-profit entity is governed by a not-for-profit board of directors, which supposedly protects the interests of all stakeholders, not just shareholders.

Last week’s drama exposed the flaws in this model clearly and publicly: Microsoft, which owns a 49 per cent stake in the capped-profit entity, intervened in matters of governance, resulting in the reversal of the decision to fire the CEO. It is hard to think of a better demonstration of the failings of a model that was supposed to insulate the company against corporate interests.

So-called B-Corps combine a for-profit approach with a rigorous corporate responsibility agenda defined by the non-profit network B-Lab and can be seen as a credible improvement to traditional corporate models.

All B-Corps attempt to replace shareholder primacy with broad stakeholder concerns, under the shared belief that “businesses should aspire to do no harm and benefit all”. B-Corps are more likely to demonstrate independence from corporate interests, and examples such as ice cream maker Ben & Jerry’s, US outdoor clothing brand Patagonia, and craft brewer BrewDog provide some confidence that the model inspires responsible and independent behaviour.

Critics, however, believe that the B-Corp movement is a well-meaning but inherently biased “charade for changing the world” by the “Davos Elite”, or that it is open to misuse as a greenwashing or virtue-signalling device by some companies.

Since the majority of the world’s jurisdictions have yet to enact stakeholder governance statutes, the model depends on the inherent biases of B-Lab and on the extent of each company’s own commitment to broad stakeholder concerns. Currently at least, B-Lab certification would not provide sufficient protection against corporate interests controlling a company’s AI innovations.

Why Company Limited by Guarantee could work in the AI world 

Well-known medical insurance company BUPA and technology firm n2 Group are two examples of CLGs. In this model, security and governance are provided by a board and a membership rather than by financial stakeholders.

This model has historically served as a basis for non-profits, but, as BUPA and the n2 Group show, it can also be a highly efficient and differentiated commercial structure.

With no shareholders, there is no direct route to bias and manipulation. A CLG is susceptible to manipulation only through its membership, whose powers are more limited and diluted.

This small risk could be further mitigated by tighter control of the membership process – for example, by requiring that members demonstrate experience or knowledge of governance, or a track record of community or non-profit engagement.

It is fair to say that the value of the CLG has not been adequately explored. While there are challenges in ensuring that the membership is impartial and adequately reflects the societal aims of the CLG, it can be argued that this model is far less susceptible to bias and influence than the other corporate structures discussed here.

For example, the NAG Library, a software product owned by the n2 Group, has been trusted for decades to provide highly accurate and robust numerical calculations in production systems across financial services, aerospace, and nuclear fusion – in part because of the CLG structure and the independence it affords.

Society is on the cusp of placing enormous power and trust in the hands of AI technologies developed, maintained, or governed by commercial entities. It makes sense to assess the options for limiting the risks inherent in this, and the CLG model is one that deserves a closer look.


Adrian Tate is Chief Executive Officer of NAG (Numerical Algorithms Group). He is studying for a Doctor of Business Administration (DBA) at Warwick Business School.

Christian Stadler is Professor of Strategic Management and author of Open Strategy: Mastering Disruption from Outside the C-Suite. He teaches Strategic Advantage and Strategy and Practice on the Executive MBA and Global Online MBA.

