Moody blues: Algorithmic inertia was at the heart of Moody's failure to warn of risky mortgage securities in the run-up to the financial crash

Organisations are increasingly turning to sophisticated data analytics algorithms to support real-time decision-making in dynamic environments.

However, these organisational efforts often fail — sometimes with spectacular consequences.

In 2018, real estate marketplace Zillow launched Zillow Offers, an 'instant buyer' arm of the business.

It leveraged a proprietary algorithm called Zestimate, which calculated how much a given residential property could be expected to sell for.

Based on these calculations, Zillow Offers planned to purchase, renovate, and resell properties for a profit.

While it had some success for the first few years, the model failed to adjust to the new dynamics of a more volatile market in 2021. Zillow lost an average of $25,000 on every home it sold in the fourth quarter that year — resulting in a write-down of $881 million.

This is an example of what we call 'algorithmic inertia': organisations use algorithmic models to take environmental changes into account but fail to keep pace with those changes.

How Moody's failed to see the financial crash coming

To understand the phenomenon of algorithmic inertia, we conducted an in-depth study of another organisation that failed to respond to changes in the environment: Moody’s, a financial research firm that provides credit ratings for bonds and complex financial instruments such as residential mortgage-backed securities (RMBSs).

In the period leading up to the global financial crisis of 2008, these securities aggregated bundles of individual mortgages into distinct tranches, each with unique characteristics.

Moody’s made a concerted effort to account for environmental changes in its credit ratings by developing a proprietary algorithmic model in 2000 called M3 Prime. The model analysed data about properties, mortgage holders, and the economy to estimate two parameters central to calculating a credit rating: expected losses for the mortgage pool and the loss coverage protection required for a security to maintain a AAA rating.
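To make those two parameters concrete, here is a minimal Monte Carlo sketch in Python. It is purely illustrative, not Moody's M3 Prime: the pool size, default probabilities, loss severity, and rating quantile are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def rate_pool(default_probs, loss_given_default=0.4,
              n_sims=10_000, aaa_quantile=0.999):
    """Estimate expected pool loss and the loss coverage a senior
    tranche would need to survive losses up to a chosen quantile.
    Illustrative only; all parameters are invented."""
    # Simulate defaults for each loan in each scenario. Treating
    # defaults as independent is itself a buried assumption, and
    # exactly the kind of premise that broke down before 2008.
    defaults = rng.random((n_sims, len(default_probs))) < default_probs
    # Per-scenario loss as a fraction of pool principal (equal loan sizes).
    losses = (defaults * loss_given_default).mean(axis=1)
    expected_loss = losses.mean()
    # Coverage a senior tranche needs to withstand the quantile loss.
    required_coverage = np.quantile(losses, aaa_quantile)
    return expected_loss, required_coverage

# Hypothetical pool: 1,000 mortgages with invented default probabilities.
pd_loans = rng.uniform(0.01, 0.05, size=1_000)
el, cov = rate_pool(pd_loans)
print(f"Expected pool loss: {el:.2%}; required loss coverage: {cov:.2%}")
```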

An analyst would present a recommendation to Moody’s credit rating committee, which assigned a publicly posted rating for the security.

Moody’s monitored these ratings and upgraded or downgraded RMBSs as the environment changed.

The M3 Prime model achieved early success, so in 2006, Moody’s expanded its scope of algorithmic analysis by introducing a derivative model, M3 Subprime.

Between 2000 and 2008, Moody’s provided credit ratings for thousands of RMBSs, but by 2008 it had downgraded 83 per cent of the AAA-rated mortgage-backed securities, valued at billions of dollars.

The US government, along with 21 states and the District of Columbia, held Moody’s responsible for the role that its inflated ratings of those and other products played in precipitating the financial crisis. In 2017, the agency agreed to pay $864 million to settle the allegations.

What are the four sources of algorithmic inertia?

This is a particularly illustrative example of algorithmic inertia, with devastating societal consequences. Moody’s decisions offer an excellent context for exploring the phenomenon because the firm was explicitly responsible for analysing environmental changes as part of its core service.

Moreover, we were able to access detailed information about its algorithmic model from a report produced by the Financial Crisis Inquiry Commission that includes extensive interviews conducted under oath with Moody’s executives who were involved in the business at the time.

Our analysis enabled us to identify the 'four sources of algorithmic inertia'.

1 Buried assumptions 

Failing to revisit the fundamental assumptions undergirding an algorithmic model’s inputs in light of changes in the environment contributes significantly to algorithmic inertia.

For example, loan originators were increasingly underwriting mortgages based on lower credit standards and substantially less documentation than before.

So an original assumption undergirding Moody’s M3 Prime model — that the technology that mortgage originators were using to streamline the loan application process was also enabling a more accurate assessment of underlying risks — wasn’t modified to reflect the changing lending environment.

The managing director of credit policy at Moody’s told a federal inquiry panel that he sat on a high-level structured credit committee that would have been expected to deal with issues like declining mortgage underwriting standards, but the topic was never raised.

“We talked about everything but … the elephant sitting on the table,” he said.

Moody’s model also assumed that consumers’ credit scores were the primary predictive factor in loan defaults.

But the quality of this data input significantly diminished over time: as the use of these credit scores became increasingly common, individuals found ways to artificially inflate them. As a result, low- and no-documentation mortgages carried latent risk that was not being taken into account in Moody’s algorithmic model.

2 Superficial remodelling 

This phenomenon occurs when organisations make only minor modifications to the algorithmic model in response to substantive changes in the environment.

At Moody’s, some major changes to the environment included a growing number of loan originators, increasingly low-quality mortgages, and an unprecedented decline in interest rates.

Moody’s response to these changes was to seek to capture more business in the rapidly growing market, so it fine-tuned the model to be “more efficient, more profitable, cheaper, and more versatile,” according to its chief credit officer — not to be more accurate.

When it modified M3 Prime to introduce the M3 Subprime model, it extrapolated loss curves for subprime loans from prime loans rather than developing fresh loss curves for subprime loans.

3 Simulation of the unknown future

Relying on an algorithmic model to produce viable scenarios for the future environment can also leave organisations vulnerable to algorithmic inertia.

Moody’s constructed a simulation engine featuring 1,250 macroeconomic scenarios that enabled it to estimate possible future losses based on variations in economic markers such as inflation, unemployment, and house prices.

However, the simulation engine was limited by its underlying structure and assumptions, so analysts did not consider the changes that were occurring, did not update scenarios, and failed to accurately represent the changing macroeconomic environment.

Based on the belief that detailed performance histories could more precisely reveal causal links between economic stresses and loan behaviour, Moody’s examined behaviour in stress scenarios using estimates derived from historical parameters rather than expected pool loss distributions.
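The sketch below suggests, under invented assumptions, what such a scenario engine looks like in miniature: macroeconomic scenarios are sampled from historical-looking distributions and mapped to a loss estimate, so any future outside those distributions simply cannot appear in the output.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SCENARIOS = 1_250  # matching the count reported in the article

# Hypothetical macro scenarios drawn from historical-looking distributions.
inflation = rng.normal(0.025, 0.010, N_SCENARIOS)
unemployment = rng.normal(0.050, 0.015, N_SCENARIOS)
house_price_growth = rng.normal(0.040, 0.030, N_SCENARIOS)

def pool_loss(infl, unemp, hpg):
    # Invented linear loss model: losses rise with unemployment and
    # fall with house price growth. Both the coefficients and the
    # sampling distributions are frozen assumptions; a regime such as
    # a nationwide house price collapse is effectively unreachable.
    return np.clip(0.01 + 1.5 * unemp - 0.8 * hpg + 0.2 * infl, 0.0, 1.0)

losses = pool_loss(inflation, unemployment, house_price_growth)
print(f"Mean simulated loss: {losses.mean():.2%}; "
      f"worst simulated case: {losses.max():.2%}")
```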

4 Specialised compartmentalisation 

This situation arises when experts in different domains are involved in an algorithm’s design and use, and there is no single owner of the model or shared understanding of how it works.

At Moody’s, responsibilities for the credit rating routine were divided between the domain experts (credit rating committee members) who used the quantitative model and the quantitative analysts who had developed it.

Because ownership and use of the model were distributed, and Moody’s didn’t strictly define how to use it, credit rating committee members established ad hoc rules to adjust the results of the model when its outputs didn’t conform to what their expert judgement led them to believe it ought to produce.

Model outputs weren’t considered final; rather, the models were seen as tools to be used in conjunction with other approaches, and there was much divergence in how different ratings committees made their determinations.

The models were developed and modified by individuals who were distant from the domains in which they would be applied; disparate groups of domain experts then used the models in inconsistent ways without understanding their underlying logic.

The managing director for rating RMBSs described the model as so technically complex that few people understood how it worked.

This issue is at the heart of what makes algorithmic inertia hard to tackle: the models and algorithms are often so complex that domain experts can hardly grasp the details of their functioning, while data scientists are disconnected from how their models are being used in the real world.

Four practices to combat algorithmic inertia

We have described how each of the causes of algorithmic inertia played out in Moody’s use of an algorithmic model to dynamically incorporate changes in the environment into its credit ratings.

Despite recognising flaws in the model and making active attempts to change it, the organisation was unable to effectively adapt to the environment, thereby substantially contributing to the 2008 financial crisis.

To prevent similar degradation of critical algorithms’ predictive value, we suggest that organisations implement these four practices:

1 Expose data and assumptions

Organisations should articulate and document the data used in their algorithmic models, including data sources and the fundamental assumptions underlying their data selection decisions; left unexamined, those assumptions can have deleterious effects.

Models often include operationalisations of many concepts, and it is easy for companies to lose track of these parameters, which can be buried in layers of software code.

Parameters representing the environment need to be documented to ensure that they remain visible. Similarly, the fundamental assumptions undergirding the model should be articulated and periodically revisited.

Moody’s used a data set on prime mortgages to train a model that was intended to be used to rate RMBSs composed of subprime mortgages.

Initially, this might have been a reasonable choice due to the availability of data. But when a model’s initial data set isn’t refreshed, algorithmic inertia can result.

As the Moody’s case suggests, data is never completely accurate, objective, and flawless. Therefore, making the sources of data and assumptions about those sources transparent to algorithm users, and continually reflecting on the appropriateness of that data, are critical practices for organisations seeking to avoid algorithmic inertia.

Firms must keep data sources clearly organised and evaluate them periodically. Different data sources have different qualities and characteristics.

Ensuring that these sources are kept distinct before they are fed into algorithmic models and processed, and constantly comparing them against one another, enables data scientists to identify and eliminate algorithmic inertia sooner rather than later.

The assumptions underpinning the use of an algorithm should also be documented and articulated.

Any attempt to model the environment involves quantification — transforming aspects of reality into numerical data. Such quantification inevitably involves making assumptions about how the environment works.

However, while quantification is necessary for algorithmic models to work, details about how it is performed can get lost in the complex process of designing and using algorithmic models. Therefore, maintaining a living record of such assumptions may prevent the emergence of algorithmic inertia.
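As a sketch of what such a living record might look like in practice, assumptions can be stored as first-class objects with explicit owners and review dates. The fields, cadence, and registry entries below are our own invention, not a prescribed standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """One documented model assumption with an explicit review cadence."""
    statement: str
    rationale: str
    owner: str
    last_reviewed: date
    review_every_days: int = 180

    def is_stale(self, today: date) -> bool:
        return (today - self.last_reviewed).days > self.review_every_days

# Hypothetical registry entries echoing the Moody's case.
registry = [
    Assumption(
        statement="Credit score is the primary predictor of default",
        rationale="Historically true; degrades if scores are gamed",
        owner="risk-analytics",
        last_reviewed=date(2006, 1, 15),
    ),
    Assumption(
        statement="Streamlined underwriting does not reduce loan quality",
        rationale="Held at model launch; sensitive to lending standards",
        owner="credit-policy",
        last_reviewed=date(2005, 6, 30),
    ),
]

for a in registry:
    if a.is_stale(today=date(2007, 1, 1)):
        print(f"REVIEW OVERDUE: {a.statement} (owner: {a.owner})")
```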

2 Periodically redesign algorithmic routines

Organisations should regularly redesign — and be willing to overhaul — their algorithmic model and reconsider how it fits into broader organisational routines.

The initial design of an algorithmic model can take a lot of work, and it is natural for an organisation to want to reap the benefits of that work.

However, in a dynamic and quickly changing environment, it’s important to be willing not just to make incremental changes to a model but to fundamentally overhaul it if necessary.

Of course, organisations face a trade-off when it comes to overhauling an algorithmic routine: it can be very expensive to completely re-architect an algorithmic model. However, the consequences of failing to do so can be disastrous.

For example, when Moody’s had to rate an increasing number of subprime-dominated RMBSs, it chose to incrementally modify the M3 Prime model.

However, it may have been more effective to specify the distinctions between the prime and the subprime markets and do a deeper overhaul of the original model.

In addition to rethinking the algorithmic model itself, an organisation can consider how it is deployed in practice: hypothetically, Moody’s could have applied the M3 Prime model differently to different types of RMBSs — perhaps simply requiring more human intervention for tranches composed of lower-quality loans.

Redesigning and overhauling an algorithmic model is contingent upon understanding what organisational processes interrelate with the model and analysing the implications that changes in the environment have for it.

If it becomes clear that either the model or the processes that it relies on or feeds into have been rendered obsolete or ineffective, an overhaul should be seriously considered.
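One hedged way to operationalise the trigger for such a review is a simple drift check: compare the model's recent errors with its design-time errors and flag a widening gap. The threshold and data below are invented for illustration.

```python
import numpy as np

def needs_overhaul(baseline_errors, recent_errors, tolerance=1.5):
    """Flag an overhaul review when recent mean absolute error exceeds
    the design-time error by more than `tolerance` times."""
    baseline_mae = np.mean(np.abs(baseline_errors))
    recent_mae = np.mean(np.abs(recent_errors))
    return recent_mae > tolerance * baseline_mae

# Hypothetical example: rating errors on the loans the model was built
# for versus errors on newer, lower-quality pools.
rng = np.random.default_rng(1)
baseline = rng.normal(0.00, 0.02, 500)
recent = rng.normal(0.03, 0.05, 500)
print("Overhaul review triggered:", needs_overhaul(baseline, recent))
```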

3 Assume that the model will break 

It can be dangerous for an organisation to think of potential future scenarios only through the prism of what algorithmic models predict: all assumptions embedded in a model limit the potential futures that can be considered.

To address algorithmic inertia associated with the simulation of an unknown future, it is important to assume that the model will break.

Consider scenarios beyond the scope of the algorithmic model; this requires challenging predictive assumptions as well as presuming that the model is fundamentally flawed.

An active practice of considering scenarios that are outside the model can help motivate and inspire the prior two practices — exposing data and assumptions and periodically redesigning algorithmic routines — by forcing team members to actively consider the limitations of algorithmic models.

One particularly useful approach might be to use qualitative predictions of the future instead of quantitative predictions that rely on available data from the past.

These forms of scenario planning offer opportunities to consider radically different visions of what the future may hold. This might also entail developing hybrid algorithms that do not rely solely on past data to predict scenarios but also embed qualitative measures and expert rules introduced by domain experts.
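Here is a minimal sketch of that hybrid idea, with every scenario and number invented: the quantitative view comes from historical data, while expert rules name futures the data has never produced, and the final estimate never falls below the worst expert scenario.

```python
import numpy as np

rng = np.random.default_rng(7)

# Quantitative component: loss scenarios sampled from historical data.
historical_losses = rng.normal(0.02, 0.01, 1_000).clip(min=0.0)

# Qualitative component: expert-defined futures outside the data,
# each with an assumed loss level (all values invented).
expert_scenarios = {
    "nationwide house price decline": 0.30,
    "simultaneous originator failures": 0.20,
}

def stressed_loss_estimate(quantile=0.99):
    model_view = np.quantile(historical_losses, quantile)
    expert_view = max(expert_scenarios.values())
    # Assume the model will break: never report less than the worst
    # expert scenario, even if the data assigns it zero probability.
    return max(model_view, expert_view)

print(f"Stressed loss estimate: {stressed_loss_estimate():.0%}")
```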

4 Build bridges between data scientists and domain experts 

Organisations must create processes for data scientists and domain experts to work closely together to design their algorithmic routines.

Practically speaking, data scientists and AI specialists approach problems very differently than domain experts do. Domain experts focus on organisational routines and idiosyncratic situations, whereas data scientists focus on developing generalisable constructs based on mathematical principles.

To overcome algorithmic inertia, data scientists and domain experts must work closely together to understand how characteristics of organisational routines and idiosyncratic situations map to the mathematical parameters used in an algorithmic model.

When the worlds of data scientists and domain experts are completely separate, there is also the danger that they will shift responsibility to one another by superficially trusting each other’s work. Such superficial trust can actually prevent crucial dialogue between the two worlds.

For instance, Moody’s ended up subverting the results of its credit rating model because the credit rating analysts didn’t attempt to understand why the model might be generating results that didn’t fit with their intuitions.

Building bridges enables domain experts to obtain an intuitive grasp of how the algorithmic model works. Such common ground could enable organisations to create and use models that better adapt to changes in the environment.

One structural bridge-building practice that organisations can use to facilitate communication between data scientists and domain experts is establishing a position such as a product manager.

This should be held by one individual with both domain and data science experience who has direct responsibility for overseeing algorithmic routines.

For example, some data experts have called for the creation of a new organisational structure that includes an 'innovation marshal' role — someone who is respected by both data scientists and field experts.

Given their knowledge and expertise in both areas, these people can gain the respect of the organisation by developing and maintaining high-bandwidth, bidirectional communication channels that help ensure that algorithmic routines are able to adapt to environmental changes.

Another bridge-building practice is called 'model explainability': describing the algorithmic models in a practical and comprehensible manner.

For data scientists, such explainability can facilitate access to the expert knowledge needed to counteract the sources of algorithmic inertia; for domain experts, such explainability can help them develop a deep and intuitive understanding of how the model takes environmental changes into account.

Model explainability establishes common ground between two groups of professionals who have different types of expertise. Such practices enable organisations to build bridges instead of just talking about them.
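One concrete, if modest, explainability technique that fits this description is permutation importance: shuffle one input at a time and report how much prediction error grows, in terms a domain expert can interrogate. The toy model and feature names below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: three features a rating model might use.
names = ["credit_score", "loan_to_value", "documentation_level"]
X = rng.normal(size=(2_000, 3))
y = X @ np.array([0.6, 1.2, 0.9]) + rng.normal(0, 0.1, 2_000)

# Toy fitted model: ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ w

base_error = np.mean((predict(X) - y) ** 2)
for j, name in enumerate(names):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to y
    increase = np.mean((predict(Xp) - y) ** 2) - base_error
    print(f"{name}: error rises by {increase:.3f} when shuffled")
```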

Organisations seeking to reap the benefits of powerful predictive analytics are increasingly confronting the problem of algorithmic inertia.

Despite leveraging dynamic algorithms to adapt to changes in the environment, organisations may find that the results are not keeping pace with new developments.

By exposing data and assumptions, periodically redesigning algorithmic routines, assuming that their models will break, and building bridges, organisations can increase the likelihood that their substantial investments in algorithmic solutions will pay off with better decision-making.

This article is republished from MIT Sloan Management Review.

Further reading:

How banks can use data analytics to identify early signs of financial trouble

Algorithmic management: Learning from Uber's woes

Algorithmic Routines and Dynamic Inertia: How Organizations Avoid Adapting to Changes in the Environment

 

Omid Omidvar is an Associate Professor of Organisation and Work and teaches Organisational Behaviour on the Full-time MBA. He also lectures on Leading and Managing Change on MSc Management and Management in Practice on the Undergraduate programme.

Vern Glaser is an Associate Professor in the Department of Strategy, Entrepreneurship and Management at the University of Alberta.

Mehdi Safavi is a Senior Lecturer in strategy and organisation in the Strategy Group at Cranfield School of Management.

Learn more about making your organisation withstand a rapidly changing environment on the four-day Executive Education course Leading an Agile and Resilient Organisation.

For more articles on Finance and Markets and the Future of Work, sign up to Core Insights here.