Interview: Erica Thompson

LML Fellow Erica Thompson is a senior policy expert at the London School of Economics’ Data Science Institute. She is also the author of a fascinating new book, Escape from Model Land, which was published in the UK by Basic Books on 24 November. In the book, Erica surveys scientists’ broad reliance on both simple and complex mathematical models in areas ranging from weather prediction to finance, and explores some of the seductive problems arising in their use, especially when scientists come to trust their models a little too much – opening them up to overconfidence and poor reasoning.

Any scientist involved in the use of models – and that’s pretty much all scientists – will find the book illuminating. It’s a useful guide to the benefits and also the costs of using models, something most scientists do with little deep thought.

To find out more about the background and motivation for the book, I sat down with Erica for a short Zoom chat. Our discussion below has been lightly edited for clarity.

Mark Buchanan


Mark: I really enjoyed your book! I suppose I had thought previously in some superficial way about many of the issues you discuss, but you bring them into focus in a powerful way and give great advice about how to avoid the worst mistakes in using models. How did the idea for the book emerge out of your own work on climate modelling?

Erica: Well, I guess it was during my PhD, which looked at North Atlantic storms in a changing climate. I started by doing a literature review of the results that already existed, looking at what would happen to North Atlantic storms given different assumptions and different kinds of models, and I made tables of what they all showed.

I realized that you could actually find models that said the storm tracks were going to go northward or southward, get stronger or weaker, or that intense storms would become more common or less common. I found everything and its opposite, and the models didn’t even agree within their own error bars.

So that for me was an interesting point. I sort of realized this is not telling me very much at all about North Atlantic storms, but it’s really telling me a lot about how we use models and how we make models. And about what we are doing when we try to assess uncertainty ranges.

What does it mean if we have gone through this process and ended up with uncertainty ranges that don’t even overlap? That was what started me off on the topic.
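The situation Erica describes can be made concrete with a toy check. The numbers below are invented purely for illustration; the interval-overlap test is the only substantive part:

```python
# Hypothetical illustration: two climate models project a shift in
# storm-track latitude (degrees north) with 95% uncertainty ranges.
# The numbers are made up for illustration only.

def overlaps(a, b):
    """Return True if closed intervals a=(lo, hi) and b=(lo, hi) overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

model_a = (1.5, 3.0)    # projects a northward shift of 1.5 to 3.0 degrees
model_b = (-2.0, -0.5)  # projects a southward shift of 0.5 to 2.0 degrees

# The two "uncertainty ranges" are disjoint: at least one model's
# stated uncertainty must be badly underestimated.
print(overlaps(model_a, model_b))  # False
```

When ranges fail to overlap like this, no amount of averaging within the ensemble can fix it; the disagreement is evidence about the modelling process itself, which is exactly the point Erica draws from her literature review.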

Mark: You’ve worked mostly in climate modelling, but your career has shifted in recent years to study modelling in a much more general sense.

Erica: Very much so. Since then I’ve looked at the process of model building and calibration in many areas of science and found that we have the same problems everywhere.

Mark: One of the most interesting things you touch on in the book is the role of emotion, and the psychological danger of falling in love with your models. It’s easy for scientists to find the wonderful world of the model seductively comfortable. In Model Land, everything is tidy and well defined, unlike the messy real world. It’s alluring to stay in that world and do comfortable analyses without having to face messy uncertainties.

To have some insight into that comfort, you must have had some personal experience of living in model land. Have you ever been trapped in model land? How did you manage to get out?

Erica: That’s a good question. During my PhD I was almost systematically trapped in model land, because the remit of the PhD was to look at the models. It was puzzling when I realized quite early on that the models were actually intrinsically limited, and I wondered: how can I reconcile the differences in the projections that different models were making? I think at some point I realized that there is actually no way to do that mathematically within the framework of the models themselves.

Rather, you have to take a step back, step out of model land, to do that. And yet that was really hard to do. There was no framework for how to do it.

Mark: Of course, some people spend years studying one model and that pretty much defines their career. And this is okay, it’s a legitimate part of science too.

On a different issue, during the COVID episode, many people in general, as well as scientists, were really surprised by how things became so politicized. Models in particular were swept up in the politicization. Were you surprised by that or was it kind of something you anticipated, given your prior thinking about models?

Erica: With a background in climate, one can’t be surprised by the politicization of models. It’s something that we’ve been used to for a long time. But it’s been interesting to see the same forces acting in the context of COVID, with the same kinds of science and anti-science and pseudoscience. The difference is that, with climate, things change slowly, whereas with COVID, things were evolving over days, weeks, months. The modellers quickly realized that they were in an extremely contested area and were going to have to deal with that.

It’s been interesting to see how the modellers had to come to terms with jumping out of model land and into the real world as the models were suddenly being used for policy decisions that somebody would have to justify and account for. It’s good that happened. It’s good practice and it allows other people to look at the models and improve on them.

Mark: The episode was an education for scientists, and probably also for the public, who normally don’t hear much about the routine scientific debate over models and their development and use. Do you think the public has a good understanding of what models are to scientists, or of how scientists use them? Or do they have important misconceptions?

Erica: There are lots of misconceptions, not only about models but about science in general. But I think there’s been a big change in understanding, at least for the more numerate part of the public. I think a lot of people have realized that models have complexities that they hadn’t understood before models became part of this really public debate.

Mark: Speaking of complexities, this makes me wonder how things might change in the future. Over the last week or so, Twitter’s been on fire about the chatbot GPT-3 and the new things these large language models can do. Artificial intelligence is moving ahead very quickly, making it possible to speed up simulations by millions of times and things like that. I would expect that the nature of models will change as well.

I wonder: In a few years, AI may be making its own models that we’re not going to understand. How do you think that will change modelling?

Erica: Yes, I think AI models are the next frontier. And maybe there will be a point where we will have to make a decision about what the purpose of modelling is. For example, the GPT-3 stuff is really interesting because what it produces is very good text on many levels, but also really terrible. It generates literal bullshit and just comes up with things that superficially sound right. So it’s interesting to ask: what do we mean by a good output here? What is it for and what are we producing and why are we trying to produce it?

The same questions apply to other models. We may want a model to be more accurate in some way, and the AI may produce a model which is super fast and really good on the chosen metric. But do we understand how it is achieving that outcome? How might you be misled by it? Is it bullshitting you? Is it just looking plausible? There are risks associated with not understanding. And this is true not just for AI but more generally: the performance of many algorithms is essentially unexplainable.

Mark: As we move into that future, I think, readers of your book will be better prepared than most to appreciate the important issues. Just one final question: Do you have a particular mental model that you use when thinking about models? Is there a space of possible models, and any sensible way we can reason about it?

Erica: There are a number of ways I could answer that question. The mathematical answer would be that the space of models is not an ordinary vector space. The space has infinite dimensions, an uncountable infinity of dimensions, and this is what makes it so difficult to use statistical methods. We don’t have a finite space of possible models over which we could construct a probability distribution. As a result, even with an infinite number of samples, we can’t sample all the directions of model space.
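One way to picture the sampling problem is a loose finite-dimensional analogy (not Erica’s formulation, and the dimensions and ensemble size below are invented): a finite ensemble of models can span at most as many directions as there are models, leaving the rest of model space entirely unsampled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "model space": vectors in a 1000-dimensional space,
# a finite proxy for the genuinely infinite-dimensional real thing.
ambient_dim = 1000
n_models = 25  # multi-model ensembles are often about this small

# Each row is one "model" drawn at random.
ensemble = rng.normal(size=(n_models, ambient_dim))

# The ensemble spans at most n_models directions; the other 975
# directions of this toy "model space" go completely unsampled.
print(np.linalg.matrix_rank(ensemble))  # 25
```

In the truly infinite-dimensional case the gap never closes: however many models you add, the sampled subspace remains a vanishing sliver of the whole, which is why ensemble spread cannot be read as a probability distribution over all possible models.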

So that’s maybe a mental model of what a model is. But I’d also say a model is a metaphor: models act as metaphors, and they can be useful and extended just as metaphors can. But then they also eventually break down, just as metaphors do.

A key problem – and one of the big messages of my book – is that anyone working with models needs to understand how far the metaphor can be pushed before it becomes inapplicable.

Mark: Without the metaphors, we can’t even say anything. We need these tools just to be able to organize and communicate our thinking.

Erica: Yes, models help us to think. They help us to come up with new ideas. They generate insights. They help us to explore the logical consistency of our assumptions. They help us to generate narratives and stories around our ideas. And in that way they help us to communicate them to other people.

Mark: Thanks!
