Erica’s research focusses on subtle issues affecting the use of mathematical models and simulations in aid of real-world decision-making. In her own words,

“The aim of the project is to explore the epistemic foundations of modelling methods and model evaluation metrics, and to consider how expert judgement can be used both to construct and to calibrate models. Models of complex systems are also complex. How can we robustly estimate uncertainty in decision-relevant model output? Is there a wider paradigm for uncertainty assessment which reflects the formal inapplicability of Bayesian methods but does not revert solely to arbitrary expert judgements? And how can we account for the co-development of expert judgement with the models themselves?”

In her new role, Erica will focus on research examining a number of distinct but closely related topics. Some themes of particular interest are listed below, with a further description of each offered by Erica:

**Model land is not the real world**

The map is not the territory: we make models precisely because the domain itself is not amenable to certain kinds of exploration. An economist may make a model to explore parallel worlds of policy intervention; a climate scientist may make a model to explore uncharted territory of greenhouse gas forcing; a neurophysiologist may make a model to avoid the ethical constraints of *in vivo* experimentation. In making models, we rely on judgements about which features of the situation are important and which can be neglected, and many choices are possible. If models are to be used to inform real-world decision-making, it is important to have some idea of how good the model is at reproducing the behaviour of the system, and even to assess this quantitatively. As the statistician George Box famously put it, “All models are wrong, but some are useful.”

**Every statistical method is an epistemology**

Formal statistical analyses of model outputs imply certain assumptions about the nature of the model and its relationship with the real world. For example, Bayesian methods of model analysis assume that the system itself is within the class of model structures considered. The use of ensemble or Monte Carlo multi-model methods to generate probability distributions assumes that “parallel universes” in model space help overcome our lack of information in our single unparameterised universe. Some of these methods can appear to work even when the model and/or data are imperfect, but on further inspection would be subject to a *reductio ad absurdum*. For example, if the system were a dog, and your set of models only included various breeds of cat, selecting one model just gives you the cat that is most like the dog, but gives no hint about fundamental structural deficiencies of the set of models. Where out-of-sample data are available for evaluation, some methods can be given a clean bill of health; others would benefit from clearer warnings.
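The cat-and-dog point can be sketched numerically. In this illustrative toy (all structures and numbers are assumptions for the sake of the example), the “dog” is data from exponential growth and the “cats” are a model class containing only straight lines: model selection dutifully returns a best-fitting line, and the in-sample score alone gives no hint that the whole class is structurally wrong — only out-of-sample evaluation reveals it.

```python
import math
import random

random.seed(0)

# The "dog": data generated by exponential growth (the true structure).
xs = [i * 0.1 for i in range(31)]  # x in [0, 3]
obs = [math.exp(x) + random.gauss(0, 0.5) for x in xs]

# The "cats": a model class containing only straight lines y = a*x + b.
candidates = [(a * 0.5, b) for a in range(21) for b in (-2.0, 0.0, 2.0)]

def rmse(a, b, pts):
    """Root-mean-square error of the line y = a*x + b over (x, y) pairs."""
    return math.sqrt(sum((a * x + b - y) ** 2 for x, y in pts) / len(pts))

in_sample = list(zip(xs, obs))
best = min(candidates, key=lambda p: rmse(p[0], p[1], in_sample))
err_in = rmse(best[0], best[1], in_sample)

# Selection always returns the cat most like the dog; nothing in the score
# says "your model class is structurally wrong". Out-of-sample data on
# x in [3, 4] reveal the deficiency:
out = [(3 + i * 0.1, math.exp(3 + i * 0.1)) for i in range(11)]
err_out = rmse(best[0], best[1], out)
print("best line:", best, "in-sample RMSE:", round(err_in, 2),
      "out-of-sample RMSE:", round(err_out, 2))
```

The out-of-sample error is several times the in-sample error here, which is exactly the kind of warning that within-class model selection cannot provide by itself.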

**The Hawkmoth Effect**

The Butterfly Effect is well-known, reflecting the sensitivity to initial conditions displayed by some dynamical systems. Because of dynamical instability, a small perturbation to initial conditions can result in a large change to the state of the system after some length of time. The Hawkmoth Effect, by analogy, refers to structural instability within models, whereby a very small difference between a model and the true underlying system can result in a large difference between the behaviour of the model and the system after some length of time. In other words, a modeller might be arbitrarily close to the correct equations, yet still not be close to the correct solutions. Consequences for modelling and simulation include problems of calibrating complex models and the difficulty of identifying “improvements” in quantitative performance with “improvements” in physical representation.

**Models and expert judgements are co-developed**

When developing a complex model of a complex system, how does a domain expert decide which elements to include or leave out? In part, this decision is *a priori* judgement; in part, it is informed by experimentation on the model itself. The behaviour of the model shapes the expectations of the modeller, which in turn shape the further behaviour of the model.

**Working with models**

Making good decisions based on information from models or simulations is not as straightforward as we might like. How can we proceed in this uncertain situation? First, it is necessary to be clear about the aims of the modelling endeavour, and to define well-targeted evaluation procedures to judge the adequacy of the model for given aims. Second, these evaluations must be carried out. Third, the evaluation process itself must be used as an integral part of model development and criticism, so that internal sources of uncertainty can be quantified and external sources are acknowledged and estimated. Only then can the model be used with confidence to inform decision support.
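The first two steps above — stating the aim and then carrying out a well-targeted evaluation — can be sketched as an adequacy-for-purpose check (the function name, numbers, and threshold below are hypothetical illustrations, not a prescribed procedure): a decision-relevant error threshold is fixed in advance, and only then is the model scored against out-of-sample observations.

```python
import statistics

def adequate_for_purpose(predictions, observations, threshold):
    """Judge a model against a pre-registered, decision-relevant target.

    Returns (adequate?, mean absolute error, error spread). The threshold
    encodes the *aim* of the modelling endeavour and must be fixed before
    looking at the evaluation data; the spread is a crude stand-in for an
    internal-uncertainty estimate.
    """
    errors = [abs(p - o) for p, o in zip(predictions, observations)]
    mae = statistics.mean(errors)
    spread = statistics.stdev(errors)
    return mae <= threshold, mae, spread

# Illustrative numbers only.
ok, mae, spread = adequate_for_purpose(
    predictions=[2.1, 2.9, 4.2, 5.1],
    observations=[2.0, 3.0, 4.0, 5.0],
    threshold=0.5,
)
print("adequate:", ok, "MAE:", round(mae, 3), "spread:", round(spread, 3))
```

The point of the sketch is the ordering, not the arithmetic: the same numbers judged against a threshold chosen after seeing them would tell a decision-maker nothing.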
