Researchers and policymakers tasked with planning for climate change often turn to so-called integrated assessment models (IAMs) to examine possible future scenarios for the Earth under different policy options. Such models aim to reflect the linkages and trade-offs between many factors, including energy and land use, the Earth's climate, economic activity and development. The IPCC's fifth assessment report distilled insights from 1134 scenarios produced by 30 global IAMs, and, more recently, the 2018 IPCC special report on global warming of 1.5 °C considered 411 scenarios from 10 global IAMs. Similar models are also used more directly in climate policy formulation, including the periodic global stocktake of progress under the Paris Agreement, international negotiations under the UNFCCC, and national strategies, targets and regulatory appraisals.
Of course, the usefulness of these analyses rests on the realism and accuracy of the IAMs they employ. Evaluating these modelling tools means assessing both the models themselves and how well they perform. IAM evaluation has a long history, including systematic multi-model comparison projects as well as ad hoc evaluations performed by individual modelling teams. IAMs have also often been criticised for a range of perceived failings, including technological hubris, omitting drivers of sociotechnical change and understating future uncertainties. As IAMs have become more widely used for informing policy, the absence of a coherent and systematic approach to evaluation has become increasingly conspicuous.
In a recent paper, LML Fellow Erica Thompson and colleagues offer a new synthesis of research on IAM evaluation, drawing on numerous examples across six evaluation methods: historical simulations, near-term observations, stylised facts, model hierarchies from simple to complex, model inter-comparison projects (including diagnostic indicators) and sensitivity analysis. For each method, they review key milestones in its historical development and application, and draw out lessons learnt as well as remaining challenges. The authors also propose four evaluation criteria – appropriateness, interpretability, credibility and relevance – which they believe could help improve IAMs and their usefulness in policy planning. They conclude by arguing for a systematic evaluation framework that combines the strengths of multiple methods to overcome the limitations of any single one.
The paper is available at https://link.springer.com/article/10.1007/s10584-021-03099-9