My research interests broadly encompass the use of modelling and simulation methods to inform real-world decisions, with a particular focus on information and decisions related to weather, climate, and climate change. I am interested in the epistemic foundations of modelling methods, in model evaluation metrics, and in the expert judgement used both to construct and to calibrate models. How can we robustly estimate uncertainty in decision-relevant model output? Is there a wider paradigm for uncertainty assessment that reflects the inapplicability of Bayesian methods (which presuppose that the true system lies within the class of model structures considered) without reverting solely to arbitrary expert judgement? And how can we account for the co-development of expert judgement with the models themselves (if my model does not do what I expect, I fix it; but my expectations have themselves been shaped partly by experience with the model)?