Economy & Business
The use and abuse of models
"There are lies, damned lies and statistics"; there are also models, damned models and lies. Sophisticated mathematical models are widely used and can be of enormous help, but not communicating their limitations properly only benefits deniers and sceptics.
By Lee Faulkner
There are several variations of the famous quote about the use and abuse of statistics, and it is somewhat uncertain who first said it and why; my favourite is that "statistics don't lie but liars use statistics". A set of accurate and verifiable statistics can often be used, or spun, to back up points of view that appear completely contradictory. That isn't the fault of the statistics - it's the fault of the people who use and/or abuse them. And it's not just politicians who are guilty - many companies mislead us by quoting the bits of a set of statistics that make them look good and leaving out the bits that don't.
In this regard, mathematical models and statistics are the same - both are useful, and both are prone to wanton misinterpretation and abuse. There is an extra dimension with models, though - understanding and communicating their limitations - and climate change models are a classic illustration of this. A lack of understanding about what a model can and can't do, and an inability to communicate these limitations clearly, can seriously weaken a proponent's argument. In the case of climate change, this failure unwittingly gives deniers ammunition they shouldn't be getting.
What is a mathematical model?
Mathematical models, as opposed to the "catwalk" variety, probably sound more boring than algebra, the lifecycle of the nematode and a two-hour speech from Xi Jinping put together - and believe me, they are (I'm an actuary - I'm allowed to say that). But it's not the model itself but what you do with it that counts (heard that line before, anyone?). Models are embedded in pretty much every part of life today, from projecting the path of Covid-19 outbreaks to the solvency of insurance companies to climate change.
In its rawest form, a model is a set of inputs and outputs. The inputs are assumptions about the future behaviour of certain things, for example the level of interest rates that central banks might set, or the amounts of CO2 dumped into the atmosphere each year. The outputs are ranges of possible outcomes if the assumptions were to be borne out in reality, for example this bank might go bust, or that planet might suffer catastrophic flooding. A model also needs a "black box" - that is, a set of interdependent equations that processes the inputs into outputs (I can see you nodding off already, but bear with me).
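If that still sounds abstract, here is the whole structure stripped to the bone - a hypothetical sketch in Python, with every number invented purely for illustration; it is emphatically not a real climate model:

```python
# A toy "model": assumptions in, black box in the middle, outputs out.
# Every number here is invented for illustration only.

def black_box(annual_co2_gt: float, years: int) -> float:
    """A stand-in for the 'set of interdependent equations':
    turns assumed emissions into a projected temperature change."""
    sensitivity = 0.0004  # assumed warming (in C) per Gt of cumulative CO2
    cumulative_co2 = annual_co2_gt * years
    return sensitivity * cumulative_co2

# Inputs: assumptions about the future behaviour of certain things
assumed_emissions = 40  # Gt of CO2 dumped into the atmosphere per year
horizon = 30            # years into the future

# Output: one possible outcome, IF the assumptions were borne out
warming = black_box(assumed_emissions, horizon)
print(f"Projected warming over {horizon} years: {warming:.2f} C")
```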
Assumptions
An assumption is exactly what it says on the tin - you assume something, but you can't know for sure whether your assumption will be right or even close. Assumptions are nothing more than educated guesses, but that doesn't mean they're worthless - focus on the "educated" part of the definition, not the "guess" part.
An epidemiologist modelling the possible outcomes of a Covid-19 variant outbreak makes assumptions based on previous variant outbreaks, how humans and the virus behaved, what has changed in the interim, and what new facts we have to hand now. As time moves on and we have more recent, and perhaps more relevant, data, we update our guesses. That is what actuaries do when we analyse the solvency of banks and insurance companies - we look at what happened before, what the data shows about how things have changed, and then update our guesses. And that's what climate change scientists do - they make educated guesses and update them when more information comes in.
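For the statistically inclined, here is one hedged sketch of what "updating a guess" can look like in practice - a textbook beta-binomial update, with every figure invented for illustration:

```python
# A hypothetical sketch of updating an educated guess as new data
# arrives, using a standard beta-binomial update. All numbers invented.

# Prior knowledge: in the last outbreak, 20 of 100 traced contacts
# were infected, so our educated guess starts around 20%
prior_infected, prior_contacts = 20, 100

# New data from the current variant: 9 of 30 contacts infected
new_infected, new_contacts = 9, 30

# A Beta(a, b) belief updated with binomial data stays a Beta belief
a = 1 + prior_infected + new_infected
b = 1 + (prior_contacts - prior_infected) + (new_contacts - new_infected)

updated_guess = a / (a + b)  # posterior mean infection rate
print(f"Updated assumption for the infection rate: {updated_guess:.1%}")
```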
A denier might say: "Ah, it's all guesswork then - I suspected as much! Your so-called model is pure invention and even less credible than I thought!" The smart retort is to ask whether they realise how ubiquitous the making of assumptions is in their everyday lives. Are they prepared for their savings to be lost because we can't make assumptions to model a bank's solvency? Or for there not to be enough school places because we can't make assumptions about fertility rates?
Outputs - ranges
The outputs, arrived at after the assumptions have bred with the black box, are projections of ranges of possible outcomes - they are not point estimates. Nor are they predictions - they are projections, and I'm not playing with semantics here. The word "prediction" implies a level of certainty that is both inappropriate and unwarranted; for example, if I say "I predict that the weather in 2050 will be hotter" I am making it sound much more probable than if I said the more nuanced "the central point of a range of projections shows that the planet will be hotter".
Outputs are then grouped to arrive at the probabilities of certain outcomes happening; for example, "based on our assumptions and model we project the planet will be hotter with a probability of 95%". With banks and insurance companies we say that they are 99.5% likely to remain solvent over the next year - in other words, that they would withstand anything short of a 1-in-200-year catastrophe.
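A hedged sketch of how that grouping works: run the model many times, then count how many of the projected futures land in a given bucket. The distribution and the threshold below are invented for illustration:

```python
import random

# A toy Monte Carlo run: many projected futures, grouped into a
# probability. The spread and the 1.5 C threshold are invented.

random.seed(42)
runs = 10_000

# Each run draws one plausible warming outcome from an assumed spread
outcomes = [random.gauss(1.8, 0.5) for _ in range(runs)]

# Group the outputs: what share of projected futures exceed 1.5 C?
p_hotter = sum(o > 1.5 for o in outcomes) / runs
print(f"Projected probability of more than 1.5 C of warming: {p_hotter:.0%}")
```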
The key point to note is that the outcomes are ranges, not point estimates, and they have likelihoods attached. This is why I despair when I see statements like "the climate will be 1.5°C hotter if we don't do this, that and the other" i.e., completely inappropriate precision with no probabilities mentioned at all.
It is also important to grasp what we actuaries call "the funnel of doubt" - the farther away in time you go, the wider the range of possible outcomes. For example, if the central banks were to raise interest rates today, we can be fairly confident of what might happen next year - the range of probable outcomes will be quite narrow - but in ten years' time? All bets are off, and the range is very wide. Climate models are very long-term projections indeed, so their funnels of doubt are extremely wide.
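You can see the funnel for yourself with a few lines of simulation - a hypothetical random walk with invented drift and volatility, nothing more:

```python
import random

# A toy "funnel of doubt": simulate many possible futures as random
# walks and watch the range of outcomes widen with the horizon.
# The drift and volatility figures are invented for illustration.

random.seed(0)
runs, horizon = 5_000, 30

paths = []
for _ in range(runs):
    level, path = 0.0, []
    for _ in range(horizon):
        level += random.gauss(0.05, 0.10)  # assumed annual drift and noise
        path.append(level)
    paths.append(path)

for year in (1, 10, 30):
    at_year = sorted(p[year - 1] for p in paths)
    low = at_year[int(0.05 * runs)]
    high = at_year[int(0.95 * runs)]
    print(f"Year {year:>2}: 90% of outcomes lie between {low:+.2f} and {high:+.2f}")
```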
Black boxes
Finally, onto black boxes, the favourite whipping boy of sceptics and deniers: "You can't see what's going on in the black box, and it's all designed to justify your point of view, so it's all an insidious plot" they cry. I chose the term deliberately because people know what it means, but it's not like that at all (and, yes, maybe I should now choose another name, but that would confuse matters).
Mathematical models can be extremely complex, but that doesn't mean they are some kind of deep-state intrigue. If you have a background in mathematics and statistics, and you know how to code, and you've got the time and energy, then you can peer review any model and give an opinion about its usefulness and appropriateness. To bleat that it's all a dystopian plot to let the world be run by experts is just lazy - it says, "I can't be bothered to think." If a climate change denier can produce models that are peer reviewable and whose outputs run counter to contemporary orthodoxy, then we have an obligation to review them and debate them; but "it's not happening because I can't be bothered to check" has no right to any debating airtime.
Communicating isn't easy but it's worth it
Communication is hard - explaining how complex mathematical models work, how to interpret their outputs and what their limitations are is a real challenge, and I do not underestimate that. But it's vitally important if we want to convert conclusions from models into action plans that everyone will buy into. This takes time and patience, but short-cutting, particularly by not communicating the limitations of models, is counterproductive in the longer term and feeds scepticism and denial. We have got to take the time to explain what a model is, how it works, how it should be interpreted and what its limitations are. If we do that then the sneering at "experts" by anti-vaxxers, climate deniers and religious fundamentalists will diminish. People don't like being patronised, and rightly so, but they won't feel like that if we speak a language they can understand.
A key part of any professional's duty is being able to communicate with lay people - that means explaining things in language they can understand, using examples they can relate to, without patronising them. As an actuary, my professional duty is to explain uncertainty and to make sure that it is appreciated and understood. Any actuary who doesn't do that isn't an actuary. Having only a limited amount of time or column inches is no excuse for not doing it.
Proper communication will not only disarm deniers - it will stop us feeding them ammunition completely unnecessarily.
Climate change models aren't accurate or static
Climate change is in the news on a daily basis now, and that's not likely to change. Much of the debate is about "modelling" and how these "models" inform public debate and policy formation. However, much is lost from these debates because we're not communicating the models' purpose correctly. I often see models described as "accurate", but there is no such thing as an accurate model.
All good models are dynamic, not static - they are continually updated as more data and events come through. And the interpretations of a model aren't static either: scientists never conclude anything with absolute certainty; they change their conclusions when more evidence comes to light that refutes what they concluded before. Do deniers do that?
If we say that "Model X predicts with a high degree of accuracy that Taipei will be 2°C warmer by 2070" we are misleading people - we are ascribing faux "accuracy" where, by definition, there is none, and we are making predictions with a certainty and permanence that a model can never give you. Nuancing your message and explaining the limitations of your model might not be as punchy or headline-grabbing, but it won't mislead.
So how do we deal with model deniers?
We should cut off deniers at source by ceasing to abuse our own models - we have got to stop making bold statements our models can't support. And we must acknowledge uncertainty - yes, that will very likely open up the charge that "as you don't know for sure, why do anything now?", but it will also open up the counter-charge that a denier needs to back up their own argument, i.e., the argument for NOT doing anything. Is a denier going to tell us not to take our heart medication because we haven't already had a heart attack? Or not to model demographics because the ageing crisis isn't a problem today?
We have to crush the comment that models are useless because they're based on guesses, i.e., assumptions. The world is full of educated guesses - the fact that they will almost certainly be wrong, or "right" only by fluke, doesn't mean we don't need to make them, or that we could survive without doing so. We must stress that assumptions are continually updated, not random picks designed to patronise and take over the world.
We should open our black boxes and invite deniers to challenge them - and then ask to see theirs. We need to explain the funnel of doubt. We must challenge them to come up with their own robust models demonstrating contrary ideas, but dismiss laziness and refuse to debate with it.
We need to explain that models never give "the right answer" because there simply isn't one; all they can do is give ranges of possible outcomes. We have to be careful with language and stress that models don't "predict" anything - they inform, but they don't predict.
Most importantly we must explain uncertainty - we don't own the truth any more than a denier does.
Lee Faulkner is a Fellow of the Institute and Faculty of Actuaries, the UK's actuarial body, and has more than 30 years' experience in the world of financial services in Asia, Europe and Latin America. He is a Taiwan Gold Card holder and now lives in Taipei.