I had been meaning to write a primer on false positives for ages, because it’s a topic I knew well from my risk institute days, but then a few other websites and magazines started writing about it, so I thought there was no point. (I am slightly kicking myself for not getting in earlier with this.) But there still seems to be a lot of confusion around about false positives, and I thought it might still help if I wrote a really clear, basic and fairly non-technical guide to them (as some of the explanations I’ve seen aren’t as clear as they could be, or cover the topic fairly quickly).
A ‘false positive’ test result is a test result that comes back positive when the patient isn’t really positive. (The concept of this is easy enough to grasp; this isn’t the difficult bit.) There are various ways this can happen, and how it can happen will vary depending on the type of test it is. For example, the test may sometimes mistake a bit of a different virus for the virus it is supposed to be looking for. (There’s a list of things that can cause a false positive, as well as a false negative, here.) But we don’t need to go into this issue here.
The next concept to grasp is that of the ‘false positive rate’. This is where the main misunderstandings arise. It’s natural to think this refers to the percentage of positives that are false. On this mistaken understanding, if you have, say, a false positive rate of 1%, and you have 100 positive results, then 99 of those are true positives, and only 1 out of the 100 is a false positive. That is what Matt Hancock appeared to think the term meant when he was interviewed by Julia Hartley-Brewer on Talk Radio.
This isn’t what it refers to, though. It actually refers to the percentage of people who get tested who will be given a ‘false positive’ result. So if you have a false positive rate of 1%, and you test 100 people, you will get one false positive. That may not sound that different to the previous case, but it is, which you can see when you consider the number of false positives you will get when you test large numbers of people. Say you test 10,000 people. 1% of 10,000 is 100. So for every 10,000 tests you do, there will be 100 false positives.
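The distinction between the two readings can be sketched in a few lines of Python (the numbers are the article’s own; the function name is just for illustration):

```python
# The false positive rate applies to the number of people TESTED,
# not to the number of positive results.
def expected_false_positives(people_tested, false_positive_rate):
    """Expected number of false positives among everyone tested."""
    return people_tested * false_positive_rate

# A 1% false positive rate, 10,000 people tested:
print(expected_false_positives(10_000, 0.01))  # -> 100.0
```

On the mistaken reading, a 1% false positive rate would mean only 1% *of the positives* are false; on the correct reading it generates 100 false positives per 10,000 tests regardless of how many true positives there are.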
This isn’t that much of an issue when a disease is rampant and you have a large percentage of people testing positive when they really have got the disease (‘true positives’). For example, say you test 10,000 people, and 3,000 test positive. The fact that 100 of that 3,000 are false positives won’t usually make much difference to policy in that scenario. 2,900 or 3,000, doesn’t really matter. There are lots of people with the disease either way; the true positives swamp the false positives.
But when a disease is dying out, or is rare, then things are different. Suppose you test 10,000 people for X. Suppose X is rare at this point, and only 1 in 1000 people have it (i.e. 0.1%, or to put it as a decimal fraction, 0.001). That means that we can expect to get 10 true positives from that 10,000. But if our false positive rate is 1%, then we will also get (as we saw above) 100 people getting false positives. That means out of our 10,000 tests, we get 110 positives, but only 10 of those are true positives. 100 out of the 110 positives – 91% – are false positives.
We can draw what’s called a ‘frequency tree’ to make this even easier to understand.
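The branches of that frequency tree can be worked out with a short sketch (same assumptions as the text: 10,000 tested, 0.1% prevalence, 1% false positive rate, no false negatives, and the false positive rate applied to everyone tested for simplicity):

```python
# Frequency tree for the rare-disease example from the article.
tested = 10_000
prevalence = 0.001   # 1 in 1000 have X
fpr = 0.01           # 1% false positive rate

have_disease = tested * prevalence        # 10 people actually have X
dont_have = tested - have_disease         # 9,990 people don't
true_positives = have_disease             # no false negatives assumed
false_positives = tested * fpr            # FPR applied to all tested, as in the text

total_positives = true_positives + false_positives
share_false = false_positives / total_positives

print(f"{tested:,} tested")
print(f"|- {have_disease:.0f} have X    -> {true_positives:.0f} true positives")
print(f"|- {dont_have:.0f} don't -> {false_positives:.0f} false positives")
print(f"{false_positives:.0f} of {total_positives:.0f} positives ({share_false:.0%}) are false")
```

Strictly, the 1% should be applied only to the 9,990 people who don’t have the disease (giving 99.9 false positives rather than 100), but at low prevalence the difference is negligible, which is why the article rounds.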
This is the sort of situation we currently face with SARS-CoV-2 (the virus that causes Covid). According to the ONS’s most recent estimate (18-24 September), only 1 in 500 people in the community (0.2%) is infected with SARS-CoV-2. So if we test 10,000 people we can expect that 20 will actually have it, and they will test positive (let’s assume there are no false negatives, in order to keep things simple). But then there are the false positives. The false positive rate for SARS-CoV-2 seems to be slightly less than 1% – that’s what Matt Hancock reported in the Talk Radio interview mentioned above. Let’s say it’s 0.9%. That means that if you test 10,000 people you will get 90 false positives.
That means you end up with 90 false positives plus 20 true positives, which added together gives you 110 positives. 90 out of 110 is 82%. 20 out of 110 is 18%. So less than one-fifth of the positive tests are real. So you can see that the issue of false positives is not just of academic interest. Most of the so-called ‘cases’ you’re currently reading about are actually just false positives.
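The same calculation with the article’s SARS-CoV-2 numbers (0.2% prevalence, 0.9% false positive rate, 10,000 tested, no false negatives):

```python
# SARS-CoV-2 worked example from the article.
tested = 10_000
true_positives = tested * (1 / 500)   # 0.2% prevalence -> 20
false_positives = tested * 0.009      # 0.9% FPR -> 90
total = true_positives + false_positives

print(f"true positives:  {true_positives:.0f}")
print(f"false positives: {false_positives:.0f}")
print(f"share of positives that are false: {false_positives / total:.0%}")  # -> 82%
```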
This wouldn’t matter if we were being sensible about Covid, which is, after all, not a dangerous disease for most people, and one which presents no threat whatsoever to society. But of course we’re not being sensible about it. The UK is, along with many other countries, pursuing a virtual zero-SARS-2 strategy, and freaking out when the virus shows any signs of slightly increasing in prevalence (even though the prevalence is, according to the ONS, extremely low). That makes the false positive issue a serious one. Massively important decisions are being made by the likes of Matt Hancock on the basis of positive test numbers, despite the fact that he fundamentally misunderstands the false positive issue.
Moreover, the false positive issue means that there will always be positive tests occurring even if in reality the virus has disappeared completely. And that means that we can never get out of this mess on the current rules.
Take a look at this graph that Christopher Bowyer recently made of the percentage of English tests that come back positive.
From the left-hand side of the graph (which starts in mid-July) we can see that the percentage of positive tests is actually less than 1% all the way through to the start of September. Only in September does it rise above 1%, getting up to 2.5% by the end of September. In summer then, Covid barely existed. Most of the positive tests were false positives. Not all, though; the ONS’s seroprevalence surveys tell us that there was a small amount of it around in summer (about 1 in 2000 people, or 0.05%), and seeing as it started rising again in autumn it clearly hadn’t died out completely. But most were. Even now, the great majority of tests are false positives.
So that’s a basic introduction to this topic, which I hope makes it clearer for those who weren’t that sure about what it was all about.
Appendix A (if you’re keen on some of the complications):
I have made some simplifications here to make the basic concepts easier to grasp. One I should mention is that I have ignored the fact that, in theory, Pillar 1 tests (those on people in hospital) should show a higher rate of people who actually have the virus than the community prevalence, because you’d expect more people in hospital to have it than a random sample of people in the community, especially in spring, when there were far more people with Covid in the hospitals.
With Pillar 2 (community testing) you should, in theory, also see a rate higher than the community prevalence, because in theory only people who are symptomatic are supposed to get tested. But as we know, many asymptomatic people in the community got tested regardless, so in reality the proportion of Pillar 2 subjects with the disease will actually be closer to the background prevalence.
On false positive rates, I should point out that for Pillar 1 tests the UK government was claiming that the false positive rate was 0.4%, not 0.9%. It’s Pillar 2 tests that are supposed to have a false positive rate of about 0.9% (some say 0.8%). I ignored this for simplification purposes, and it doesn’t change the general picture. It is probably correct that the false positive rate for Pillar 1 tests is more like 0.4%, because the positive rate for the tests all summer was around this.
Another issue is that we don’t really know for sure what the false positive (and negative) rates are for various tests. (And that’s not even mentioning issues like how many cycles a PCR test should be run for, or the fact that the vast majority of people with a positive test result, even a true one, are completely fine.) If you’ve read this far you’ll appreciate that a lack of reasonable certainty on these issues puts a very large spanner in the works. If you have a Spectator subscription you can read what Prof Carl Heneghan has to say on this issue here, and Dr Clare Craig here.
Something else that might be useful for you to know: the true positive rate is sometimes called the ‘sensitivity’, and the true negative rate is sometimes called the ‘specificity’.
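These terms can be related in a small sketch (the counts are invented for illustration, but chosen to match the article’s Covid example, so the false positive rate comes out at about 0.9%):

```python
# Sensitivity = true positive rate; specificity = true negative rate.
# The false positive rate is 1 minus the specificity.
def rates(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    fpr = fp / (fp + tn)           # = 1 - specificity
    return sensitivity, specificity, fpr

# 10,000 tests: 20 true positives, 90 false positives, no false negatives.
sens, spec, fpr = rates(tp=20, fp=90, tn=9_890, fn=0)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, FPR {fpr:.1%}")
```

Note that the false positive rate is, strictly, a fraction of the people who *don’t* have the disease rather than of everyone tested; at low prevalence the two are nearly identical, which is why the simplification in the main text is harmless.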
A good book that covers the false positive issue in general is Reckoning With Risk: Learning to Live With Uncertainty by Gerd Gigerenzer. (The issue of false positives is especially crucial to deciding when cancer screening makes sense.) The really amazing thing about Gigerenzer’s book, though, is that he and his team uncovered the fact that the great majority of doctors and public health officials and politicians didn’t have a clue about false positives, what they really mean, and how to calculate with them. Hancock is not alone in his ignorance (although given his position and the context his ignorance is inexcusable).
If you want more detail on the false positive issue specifically in regard to SARS-2, try Michael Yeadon’s article on Lockdown Sceptics.
Update: Please support this website, if you are able, by donating via KoFi, subscribing via SubscribeStar or Patreon, or buying my book (see right-hand sidebar for links). Free independent media like Hector Drummond Magazine, and my constantly updated Twitter and Parler accounts, cannot survive without the financial support of my readers.