Escape from model-land

Policy makers in areas ranging from public health to weather forecasting and economics rely on mathematical models to inform their decisions. One might expect that as models grow more complex and powerful, they would contribute to better decisions. But that’s not always the case, as LML External Fellow Erica Thompson and colleague Leonard Smith note in a recent paper. Unfortunately, mathematical models often lure those who use them into an illusion of greater certainty than the models actually warrant.
The trouble, as they describe it, is the allure of living in “model-land” – a hypothetical world in which mathematical models can be compared and their imperfections known with perfect accuracy. Inside this fairy-tale world, optimising a simulation invariably identifies desirable pathways in the real world. Typically, however, something is lost in the move from model back to reality, which explains why model-inconceivable “big surprises” are far too common. In their paper, Thompson and Smith survey this problem and emphasize how important it is for modelers to remain aware of just how different model variables are from the real-world phenomena they are meant to reflect.
As they note, adding detail to a model can in many cases make it less accurate and less useful. When is a model good enough to support a particular decision? Thompson and Smith argue that the answer depends on the decision as well as the model, and that the question is particularly pressing when taking no action at some moment could carry a very high cost.
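The point that extra detail can hurt is easy to see in a toy setting. The sketch below is purely illustrative and not an example from the paper: the synthetic data, the held-out split, and the helper `held_out_rmse` are all assumptions of this sketch. It fits polynomials of increasing degree to noisy data and scores each fit on points it was not tuned to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a quadratic signal plus noise (an illustrative stand-in,
# not data from the paper).
x = np.linspace(0.0, 1.0, 30)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.2, size=x.size)

# Hold out every third point, so each fit is judged on data
# it was never tuned to.
test_mask = np.zeros(x.size, dtype=bool)
test_mask[::3] = True
x_tr, y_tr = x[~test_mask], y[~test_mask]
x_te, y_te = x[test_mask], y[test_mask]

def held_out_rmse(degree):
    """Fit a polynomial of the given degree on the training points
    and return its root-mean-square error on the held-out points."""
    coeffs = np.polyfit(x_tr, y_tr, deg=degree)
    pred = np.polyval(coeffs, x_te)
    return np.sqrt(np.mean((pred - y_te) ** 2))

for degree in (1, 2, 15):
    print(f"degree {degree:2d}: held-out RMSE = {held_out_rmse(degree):.3f}")
```

In-sample fit always improves as the degree rises; it is the held-out score that reveals when added detail has stopped helping, and in runs like this the very-high-degree fit typically does markedly worse on the held-out points than the simple quadratic.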
There are at least two ways to escape the hazards of model-land. The first, when possible, is to repeatedly challenge a model to see how well it performs on out-of-sample predictions. In many cases, however, the lead time of interest is far longer than the lifetime of the model, and out-of-sample evaluation is not possible. In such cases, the authors argue, escape from model-land generally requires expert judgment, informed by the realism of simulations of the past. Critically, this means being very clear about the known limitations of today’s models. When decisions rest more on expert judgment, models of modest complexity are likely to be the most intuitive and informative, and to reflect their own limitations most clearly. In this setting, common to many real-world problems, less can often be more.
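A minimal version of this out-of-sample challenge can be sketched in Python. Everything here – the synthetic “history”, the linear-trend model, and the persistence benchmark – is an illustrative assumption, not the authors’ method: the point is only that the model is tuned on past data alone and then scored on observations it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "history": a slow trend plus noise. A real out-of-sample
# test would use genuinely new observations, not simulated ones.
t = np.arange(120)
series = 0.05 * t + rng.normal(0.0, 1.0, size=t.size)

# Tune the model on the first 100 observations only...
t_train, t_test = t[:100], t[100:]
train, test = series[:100], series[100:]
slope, intercept = np.polyfit(t_train, train, deg=1)

# ...then challenge it on the 20 observations it has never seen.
forecast = slope * t_test + intercept
out_of_sample_rmse = np.sqrt(np.mean((forecast - test) ** 2))

# A naive benchmark: simply persist the last observed value.
persistence_rmse = np.sqrt(np.mean((train[-1] - test) ** 2))

print(f"model RMSE:       {out_of_sample_rmse:.3f}")
print(f"persistence RMSE: {persistence_rmse:.3f}")
```

If a model cannot beat even a naive benchmark like persistence on out-of-sample data, that is a strong hint that its in-sample skill was an artefact of model-land rather than a feature of the real world.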
The paper is available at:
