By Dene Bebbington, a freelance writer.
Catastrophism is ever popular, and especially so in the last few years. Since the EU referendum campaign in particular, our society has suffered a malaise of constant worry about potential catastrophic events: Brexit, climate change, and now the Covid-19 pandemic. What all three have in common is that mathematical models underpin the predictions. Saturation media coverage, which is often misleading, has further skewed and damaged public understanding of these issues.
In the run-up to the EU referendum, economic forecasts said we could experience a recession if we voted to leave the EU. It was claimed that unemployment would rise significantly, sterling would fall, inflation would rise, house prices would fall, and so on. Though prolonged political uncertainty following the vote for Brexit did impact the economy, we didn't enter a recession, and the only parts of the forecast that came true were the fall in the value of sterling and the rise in inflation. Even so, some people still cling to the idea that they can quantify how much GDP growth fell, despite the difficulty of establishing the counterfactual.
The then-chancellor George Osborne claimed that in fifteen years every family would be £4300 a year worse off. Even short-term economic forecasts are usually inaccurate and have to be refined throughout the year; for him to put a figure on GDP fifteen years out is so laughable that it was clearly a political exercise. Besides that, the figure conflated GDP per capita with household income.
Predictive failure is no surprise, since economic forecasting has a patchy track record. The economist Prakash Loungani studied that record and found that forecasters had failed to predict 148 of 150 recessions. The summary of a 2018 IMF paper he wrote with two co-authors puts the failure of forecasting into relief:
We describe the evolution of forecasts in the run-up to recessions. The GDP forecasts cover 63 countries for the years 1992 to 2014. The main finding is that, while forecasters are generally aware that recession years will be different from other years, they miss the magnitude of the recession by a wide margin until the year is almost over. Forecasts during non-recession years are revised slowly; in recession years, the pace of revision picks up but not sufficiently to avoid large forecast errors. Our second finding is that forecasts of the private sector and the official sector are virtually identical; thus, both are equally good at missing recessions. Strong booms are also missed, providing suggestive evidence for Nordhaus’ (1987) view that behavioral factors—the reluctance to absorb either good or bad news—play a role in the evolution of forecasts.
A spectacular example of the limits of forecasting was the failure to predict the 2008 financial crisis (though some people had warned of a potential crisis).
Since the referendum, and perhaps long before, sections of the population have been in a state of perpetual angst from assuming the worst. Recently there has been the rise of Greta Thunberg and Extinction Rebellion, asserting that we're all doomed unless carbon dioxide emissions are drastically reduced. I accept the scientific consensus on climate change, but I don't take climate predictions as gospel. Science is a process of continual discovery and refinement, and it's not as if there haven't been failed climate predictions.
The so-called ‘Climategate’ scandal in 2009, concerning the work of the Climatic Research Unit (CRU) at the University of East Anglia, shone a light on academic coding standards, amongst other things. The CRU's modelling software was so badly written that a programmer trying to work with it couldn't reproduce previous results, and encountered many coding and database problems along the way. Given the recent revelations about Neil Ferguson's software for his flu pandemic model, there's reason to suspect that poor software engineering standards are endemic in academia.
Mathematical models can be useful in some circumstances – if they've been rigorously reviewed and tested, and everyone involved understands their assumptions and limitations. The reliability of short-term weather forecasts is a good example. Yet we shouldn't fall into the trap of believing that all models are good at predicting the future just because they involve mathematics most of us may not be able to follow. Instead of lapsing into a belief in models as a form of 21st-century divination, we should remain ever sceptical and accept only the highest standards in producing them.
There seem to be four key elements of a useful predictive model:
1. The mathematics and algorithm
2. The theoretical assumptions
3. The input data
4. The software that implements the model
A model is only as good as its weakest element: even if (1) is sufficiently correct, (2), (3) and (4) must also be correct. Climategate and reviews of Ferguson's code throw (4) into serious doubt for their models, and we can also doubt whether (2) and (3) hold. For example, Ferguson predicted that Sweden would suffer a much higher number of Covid-19 deaths if it didn't enact a lockdown rather than the restrictions it carried on with. Sweden's deaths have stubbornly refused to match his prediction.
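To make the four elements concrete, here is a minimal sketch of the kind of compartmental epidemic model at issue – a textbook SIR model, not Ferguson's actual code, with illustrative parameter values chosen purely for demonstration. The equations are element (1), the fixed-mixing assumptions are (2), the chosen rates and initial state are (3), and the program itself is (4): even with (1) and (2) right, a small change to an input assumption swings the predicted epidemic size dramatically, and a bug in (4) would do the same silently.

```python
# Illustrative SIR (Susceptible-Infected-Recovered) model sketch.
# Not Ferguson's model; parameters are hypothetical, for demonstration only.

def sir_model(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Integrate the classic SIR equations with simple Euler steps.

    beta:  transmission rate (new infections per contact per day)
    gamma: recovery rate (1 / average infectious period in days)
    """
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r  # total population, conserved by the update rules below
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# Two runs differing only in one input assumption: the transmission rate.
# A doubled beta (R0 of 4 rather than 2) gives a far larger epidemic,
# which is why elements (2) and (3) matter as much as the code in (4).
_, _, recovered_low = sir_model(990, 10, 0, beta=0.2, gamma=0.1, days=200)
_, _, recovered_high = sir_model(990, 10, 0, beta=0.4, gamma=0.1, days=200)
print(round(recovered_low), round(recovered_high))
```

The point of the sketch is not the particular numbers but the structure: each of the four elements is a separate place where a model can go wrong, and only (4) is visible to a reviewer reading the source code.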
Going back nearly two decades, we find that Ferguson's work was instrumental in the government's response to the 2001 Foot and Mouth outbreak. The result was the mass slaughter of millions of cows and sheep, and the suicide of several farmers. Subsequent work by a Professor of Veterinary Epidemiology has claimed that Ferguson's model was severely flawed. Ferguson himself acknowledged that he was working in real time with limited data. This is surely a reason to doubt the use of a model in that situation, even if we don't know what would have happened had the modelling been ignored and a different approach to containing the disease been taken.
Whenever the output of models has been used in public policy making, all those involved should be held accountable. We need to consider whether potentially bad predictions are better than no prediction, especially since several models have, unfortunately, become inextricably linked with political and moral worldviews.
A rewritten version of Ferguson's pandemic model was recently made available, but at the time of writing the original code has still not been disclosed. Several reviews of the rewritten code have been published by experienced software engineers who have identified problems in it. One of those reviews was picked up by the Telegraph newspaper, which reported this response from Imperial College:
The UK Government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.
Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial research team we use several models of differing levels of complexity, all of which produce consistent results. We are working with a number of legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.
Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that COVID-19 is a highly transmissible virus with an infection fatality ratio exceeding 0.5pc in the UK
More details can be found towards the end of this piece at Lockdown Sceptics.
It's revealing that Imperial's response doesn't address the specific criticisms of the code except to handwave them away with the comment that epidemiology is not a branch of computer science. However, their pandemic model is implemented as a computer program, and can therefore be examined for bugs by any suitably experienced programmer. As a defence it's as fatuous as saying that construction principles don't apply when building a hospital.
Besides, if they're now claiming that the pandemic model had no input into the decision to lock down, why did the SAGE report reference their model?