Positive COVID-19 tests continue to mount, which is scary, but the more I learn about the processes generating the data, the more skeptically I regard the numbers. And whether the data is junk or otherwise, it’s often misinterpreted or misused by the media. Here I’ll focus mainly on issues related to testing and cause-of-death. What’s striking is the likelihood of upward bias in the reported case counts based on one set of tests, even while antibodies to the virus appear to be more widespread than those counts suggest.

Testing

Almost all of the C19 test results included in the case counts we’ve seen are from polymerase chain reaction (PCR) tests, the kind involving samples collected with “brain scrambling” nasal swabs. These tests detect whether the subject is shedding any viral particles. The other kind of test is for antibodies, called a serological test, which focuses on whether a subject has HAD the virus, not on whether they have it currently. The latter test, however, might catch some active cases in addition to resolved infections.

The first problem is that some states have combined results from these two kinds of tests. That’s likely to inflate the case count because the combined totals capture both those who are currently infected and those who were infected but no longer are.

A second problem is the faulty reporting of test results we’ve seen in states like Florida, where some labs have been reporting an implausible 100% positivity rate over certain periods. That might or might not imply an exaggerated count of positives, but it certainly inflates the positivity rate. There are other practices that systematically inflate the positive test count, however, such as counting all members of a household as “probable positives”, and counting multiple positive tests on the same patient as multiple cases.

Test reliability

This is the third problem and it’s really a biggie. It’s also more complicated because there is more than one kind of accuracy on which tests are evaluated.

1) The PCR tests are said to have a sensitivity of anywhere from 66% to 80%, depending on testing and lab conditions. That means anywhere from one in five to one in three tests on infected people will miss the actual infection: that’s a false negative and a horrible mistake. An article in The New England Journal of Medicine puts sensitivity at 70%. These levels of sensitivity are poor, so there is good reason for repeat testing, or to develop and implement more sensitive tests!

2) The other kind of accuracy is called specificity. It indicates the percentage of uninfected subjects who correctly test negative. If it’s 90%, then one out of every ten tests on uninfected subjects identifies an infection that really isn’t there. That’s a false positive. It’s extremely hard to find estimates of specificity for PCR tests outside of perfect lab conditions. We know there are false positives in the real world, however, and I’ll get to that evidence below. We also know, for example, that individuals can continue to shed non-viable viral material for a time after an infection has resolved, which the test still detects, and that reduces effective specificity. False positives can also result from poor testing or lab conditions.

So here’s an example: let’s be generous and assume that test sensitivity is 80%, and we’ll give the benefit of the doubt to test specificity and say it’s 95%. Further suppose that 2% of the tested population is currently infected. Out of 1,000 tests, 20 involve infected subjects. The sensitivity implies that we’ll correctly identify 16 of them (80%) and miss four. The other 980 test subjects are virus-free, but 95% specificity implies that about 49 of those tests will come back positive (49/980 = 5%). Altogether, that yields a whopping positivity rate of:

(49 + 16)/1000, or 6.5%, well above the true infection rate of 2%.
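
For readers who like to check the arithmetic, here’s a minimal Python sketch of the same example; the 80% sensitivity, 95% specificity, and 2% prevalence are just the assumed values above, not measured test characteristics.

```python
# Worked example with assumed values: 80% sensitivity, 95% specificity,
# 2% true prevalence, 1,000 tests.
sensitivity = 0.80   # P(positive test | infected)
specificity = 0.95   # P(negative test | uninfected)
prevalence = 0.02    # assumed share of tested subjects who are truly infected
n_tests = 1000

infected = prevalence * n_tests                     # 20 subjects
uninfected = n_tests - infected                     # 980 subjects

true_positives = sensitivity * infected             # 16 infections caught
false_negatives = infected - true_positives         # 4 infections missed
false_positives = (1 - specificity) * uninfected    # 49 phantom infections

positivity_rate = (true_positives + false_positives) / n_tests
print(f"Positivity rate: {positivity_rate:.1%} vs. true prevalence of {prevalence:.0%}")
# -> Positivity rate: 6.5% vs. true prevalence of 2%
```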

So it’s very easy for a test having inadequate specificity to inflate the number of positives. That’s less problematic when prevalence is high, since fewer virus-free subjects are available to misidentify. Unfortunately, it becomes a larger concern when testing is broad and less focused on symptoms, since that implies lower prevalence in the tested population. The U.S. has sharply increased testing, roughly quintupling the number of daily tests over a span of about three months. The tested population has therefore broadened to include many more subjects who are either asymptomatic, freaked out about their allergy symptoms, or have been routinely tested on admittance to hospitals for other illnesses or procedures.
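
The broadening effect is easy to see by repeating that arithmetic at several assumed prevalence levels while holding the illustrative 80%/95% test characteristics fixed; as prevalence falls, false positives make up a growing share of all positives.

```python
# Upward bias in the positivity rate as assumed prevalence falls,
# holding the illustrative test characteristics fixed.
sensitivity, specificity = 0.80, 0.95

for prevalence in (0.20, 0.10, 0.05, 0.02, 0.01):
    positivity = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    false_share = (1 - specificity) * (1 - prevalence) / positivity
    print(f"prevalence {prevalence:4.0%}: positivity {positivity:5.1%}, "
          f"false positives are {false_share:3.0%} of all positives")
```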

Discussion

It’s absolutely necessary for society to have testing capacity for those with symptoms and those likely to be exposed to the virus, such as first responders. But rolling out the test to the broader population means the case data are much less accurate unless positive diagnoses are based on repeated tests. Unfortunately, the bulk of the testing we’ve seen thus far has been so lacking in specificity as to inflate the number of cases as testing became more widespread.

Evidence for this claim is offered by a paper just published by a Connecticut epidemiologist. He used a more robust technique to re-examine ten positive and ten negative tests provided by the CT Department of Public Health. He found that nine of the 20 subjects were truly infected, but two of those came from the ten negative tests! So, in fact, there were three false positives and two false negatives among the 20 tests. Therefore, the tests overestimated the number of actual cases by around 11% in the sample (ten reported positives against nine true infections), net of both kinds of errors. Granted, this was a small sample, and we don’t know the true prevalence of the virus in the full population of test subjects, but if we assume the positive tests and negative tests were representative, a prevalence of 5% would imply, after weighting the rates from the sample by the assumed 95% uninfected and 5% infected shares of the population, a rather drastic inflation of the positivity rate to 32.5%!

0.95 × (3/10) + 0.05 × (8/10) = 0.285 + 0.04 = 0.325, or 32.5%

That’s just outrageous!
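
To keep the arithmetic straight, here’s a short sketch that reproduces both numbers in this example: the roughly 11% overcount in the raw sample and the 32.5% weighted figure. The 3-in-10 and 8-in-10 rates are taken at face value from the weighting formula above, and the 95%/5% split reflects the assumed 5% prevalence.

```python
# Connecticut re-test example: 10 positive and 10 negative PCR results re-examined.
reported_positives = 10
true_in_positive_group = 7    # 3 of the 10 positive tests were false positives
true_in_negative_group = 2    # 2 of the 10 negative tests were false negatives
actual_infections = true_in_positive_group + true_in_negative_group   # 9

overcount = reported_positives / actual_infections - 1
print(f"Overcount in the raw sample: {overcount:.0%}")    # -> 11%

# Weighted positivity figure, assuming 5% true prevalence.
prevalence = 0.05
rate_if_uninfected = 3 / 10   # positive-test rate used in the weighting formula
rate_if_infected = 8 / 10     # positive-test rate used in the weighting formula
positivity = (1 - prevalence) * rate_if_uninfected + prevalence * rate_if_infected
print(f"Implied positivity rate: {positivity:.1%}")       # -> 32.5%
```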

The U.S. positivity rate by the end of April was about 12%, when testing was still limited; it’s been running at about 8% recently. The decline almost certainly reflects both a broadening of the test population and a decline in prevalence among those tested. That, in turn, implies that a positive test has less predictive value: even though the test captures the same percentage of true positives, a larger percentage of all positive tests will be false positives. It might seem paradoxical, but it’s likely that the 12% positivity rate early in the pandemic carried a smaller upward bias than the 8% we see currently, owing to the composition of the population tested. Under current testing, the false-positive rate applies to a much larger pool of uninfected subjects, so the number of false positives overwhelms the test’s ability to identify true positives.
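
To illustrate that paradox, one can back out the prevalence implied by an observed positivity rate under the same assumed 80% sensitivity and 95% specificity used earlier (assumptions for illustration only) and compare the resulting bias for the April and recent figures.

```python
# Prevalence implied by an observed positivity rate, under assumed
# test characteristics (illustrative values, not measured ones).
sensitivity, specificity = 0.80, 0.95

def implied_prevalence(observed_positivity):
    # observed = sens * p + (1 - spec) * (1 - p), solved for p
    return (observed_positivity - (1 - specificity)) / (sensitivity - (1 - specificity))

for label, observed in (("April, ~12% positivity", 0.12), ("recent, ~8% positivity", 0.08)):
    p = implied_prevalence(observed)
    print(f"{label}: implied prevalence {p:.1%}, "
          f"overstated by a factor of {observed / p:.2f}")
```

Under those assumptions, the early 12% rate overstates prevalence by a factor of about 1.3, while the recent 8% rate overstates it by a factor of about 2.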

The first priority of testing is to reliably identify true cases. The current PCR tests fail in that objective due to low sensitivity. But tests with poor specificity are costly too. First, they waste medical resources on uninfected subjects. Second, false positives impose worry and inconvenience on the subjects involved, which carry a real cost. Third, poor specificity compounds over repeated rounds of testing, since each additional round is another chance at a false positive. For example, it will be extremely difficult for sports teams to establish continuity, or even to maintain a full roster, because so many players are likely to become victims of false positives under repeated testing.
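
The compounding effect of repeated testing is simple to quantify if one assumes, for illustration, 95% specificity per test and independent errors across rounds.

```python
# Chance that an uninfected subject gets at least one false positive over
# repeated rounds of testing (95% specificity per test and independent
# errors are both assumptions, for illustration).
specificity = 0.95

for rounds in (1, 5, 10, 20):
    p_any_false_positive = 1 - specificity ** rounds
    print(f"{rounds:2d} tests: {p_any_false_positive:.0%} chance of at least one false positive")
```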

Death tolls

Anything that inflates the C19 case count tends to inflate C19-attributed deaths. For example, almost all hospital admissions are now tested. A high number of false positives leads to more deaths being wrongly attributed to C19. Other issues related to counting deaths go beyond the vagaries of test accuracy. Hospitals have a perverse incentive to boost their C19 cases and deaths via more generous Medicare reimbursements. Deaths are also attributed to C19 in a variety of other circumstances, some quite suspicious, but we are constantly told without evidence that C19 deaths are undercounted and so these additions must be reasonable.

The argument that deaths of C19 patients with comorbidities are rightfully attributed to C19 is likewise flawed for some of the reasons discussed above. False positives are all too common. Furthermore, patients might be admitted to a hospital with advanced or terminal conditions and die having caught C19 coincidentally at the hospital. And one can certainly quibble with the notion that the deaths of otherwise terminal patients should be attributed to C19. There is a significant grey area.

Finally, as I discussed in a previous post, the deaths reported each week are at odds with the actual timing of those deaths. There are occasionally large additions to the CDC’s provisional death counts from many weeks in the past. It’s bad enough that those deaths are reported so late and treated by the media as if they just occurred. Possibly worse is the potential for manipulating death counts for political purposes, which is enabled by the large backlog of deaths lacking attributed causes over the course of weeks and months.

Serological tests and false positives

The first serological tests for C19 antibodies, back in April, yielded surprisingly high estimates of individuals with acquired immunity to the virus, often 10 or more times the number of infections based on case counts (also see here and here). The earliest antibody test results were criticized because their specificity and the prevalence of antibodies in the general population were thought to be low. That made it relatively easy for critics to rationalize the high estimates as a consequence of false positives. We now know, however, that serological tests have higher specificity than the PCR tests for active infections, and those tests have consistently shown a larger than expected share of individuals having acquired immunity. But how does that square with the argument that case counts based on PCR tests are inflated? How can so many have developed antibodies if the case counts are so exaggerated?

To rephrase: how can the population with antibodies, those who have HAD the virus, accumulate to a level several times the case count? Keep in mind that a high proportion of the serological tests have been conducted in relative hot spots, where there are likely to be many undetected cases. There is also some question about the real timing of the pandemic in the U.S. Some believe it was spreading prior to March, so the true number of cases, diagnosed and undiagnosed, might have mounted more quickly early in the pandemic than later case diagnoses suggested. Moreover, serological testing has not been conducted on a random sample of the population. In fact, those tests are more often administered when patients go to labs for other blood work, so there is reason to believe that prevalence in this group might exceed that of the general population. It’s also possible that the serological tests are picking up antibodies developed in response to other forms of the coronavirus, which might in fact be protective. Finally, the serological tests are still subject to a level of false positives. So the antibody findings from serological tests are not necessarily inconsistent with the notion that case counts and death counts are inflated now.
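
On the last point, a rough sketch shows how even a fairly specific antibody test can owe part of its apparent seroprevalence to false positives when true prevalence is low; the sensitivity and specificity values below are assumptions chosen purely for illustration.

```python
# Apparent seroprevalence vs. assumed true prevalence for an antibody test
# with assumed (illustrative) sensitivity and specificity.
sero_sensitivity = 0.90

for sero_specificity in (0.99, 0.95):
    for true_prevalence in (0.02, 0.05, 0.10):
        apparent = (sero_sensitivity * true_prevalence
                    + (1 - sero_specificity) * (1 - true_prevalence))
        false_share = (1 - sero_specificity) * (1 - true_prevalence) / apparent
        print(f"specificity {sero_specificity:.0%}, true prevalence {true_prevalence:3.0%}: "
              f"apparent {apparent:5.1%} ({false_share:.0%} from false positives)")
```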

Summary

We truly need better, quicker tests, and many talented people are now working to improve them. My point is not to denigrate the effort to conduct testing, but to note that our current testing regime has many flaws, one of which is to raise alarm with inflated case and death counts. I do not doubt that the number of actual infections has grown in June and July. However, the positivity rate remains lower than it was early in the pandemic, with a much larger and less focused selection of test subjects. Many of the cases identified by PCR tests are false positives. As disappointing as it is to someone who loves to work with data, C19 case and death counts look unreliable.