Tags
Cambridge, Canonization Effect, Citation Bias, Climate Change, Climatology, Lee Jussim, Medical Science, Model Calibration, National Oceanic and Atmospheric Administration, Pandemic, Political Bias, Psychology Today, Publication Bias, Replication Crisis, Reporting Bias, Spin

The prestige of some elements of the science community has taken a beating during the pandemic due to hugely erroneous predictions, contradictory pronouncements, and misplaced confidence in interventions that have proven futile. We know that medical science has suffered from a replication crisis, and other areas of inquiry like climate science have been compromised by politicization. So it seemed timely when a friend sent me this brief exposition of how “scientific myths” are sometimes created, authored by Lee Jussim in Psychology Today. It’s a real indictment of the publication process in scientific journals, and one can well imagine the impact these biases have on journalists, who themselves are prone to exaggeration in their efforts to produce “hot” stories.
The graphic above appears in Jussim’s article, taken from a Cambridge study of reporting and citation biases in research on treatments for depression. But as Jussim asserts, the biases at play here are not “remotely restricted to antidepressant research”.
The first column of dots represents trial results submitted to journals for publication. A green dot signifies a positive result: the treatment or intervention was associated with significantly improved patient outcomes. The red dots are trials in which the results were either inconclusive or the treatment was associated with detrimental outcomes. The trials were split about equally between positive and non-positive findings, but far fewer of the trials with non-positive findings were published. From the study:
“While all but one of the positive trials (98%) were published, only 25 (48%) of the negative trials were published. Hence, 77 trials were published, of which 25 (32%) were negative.”
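As an aside, the quoted percentages are enough to reconstruct the approximate counts behind the figure. Here is a quick back-of-the-envelope check in Python; the totals of roughly 53 positive and 52 negative trials are inferred from the percentages, not copied from the study’s tables:

```python
# Back-of-the-envelope reconstruction of the counts behind the quoted
# percentages. Totals are inferred, not copied from the study's tables.

published_total = 77            # published trials, per the quote
negative_published = 25         # 48% of all negative trials
positive_published = published_total - negative_published   # 52

negative_total = round(negative_published / 0.48)   # ~52 negative trials
positive_total = positive_published + 1             # "all but one" published

print(f"Submitted: ~{positive_total + negative_total} trials")        # ~105
print(f"Positive:  {positive_published}/{positive_total} published "
      f"({positive_published / positive_total:.0%})")                 # 98%
print(f"Negative:  {negative_published}/{negative_total} published "
      f"({negative_published / negative_total:.0%})")                 # 48%
print(f"Negative share of published: "
      f"{negative_published / published_total:.0%}")                  # 32%
```

Running it recovers the quoted figures: 98% of positive trials published, 48% of negative trials, and a 32% negative share among published trials.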
The third column shows that even within the set of published trials, certain negative results were NOT reported or secondary outcomes were elevated to primary emphasis:
“Ten negative trials, however, became ‘positive’ in the published literature, by omitting unfavorable outcomes or switching the status of the primary and secondary outcomes.”
The authors went further by classifying whether the published narrative put a “positive spin” on inconclusive or negative results (yellow dots):
“… only four (5%) of 77 published trials unambiguously reported that the treatment was not more effective than placebo in that particular trial.”
Finally, the last column represents citations of the published trials in subsequent research, where the size of each dot corresponds to the number of times the trial was cited:
“Compounding the problem, positive trials were cited three times as frequently as negative trials (92 v. 32 citations). … Altogether, these results show that the effects of different biases accumulate to hide non-significant results from view.”
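Putting the steps together, here is a minimal sketch of the cumulative effect, using the study’s headline numbers. The step-by-step accounting is my own framing, and note that the citation counts apply to the published positive-versus-negative split rather than to the post-spin categories:

```python
# A sketch of the cumulative effect, using the study's headline numbers.
# The step-by-step accounting is illustrative; the citation counts apply
# to published positive vs. negative trials, not the post-spin categories.

positive, negative = 53, 52        # trials submitted (reconstructed above)
pub_pos, pub_neg = 52, 25          # trials published
switched = 10                      # negatives reported as "positive"
plainly_negative = 4               # published trials unambiguously negative
cites_pos, cites_neg = 92, 32      # citations of positive vs. negative trials

published = pub_pos + pub_neg      # 77

# The underlying evidence was split roughly in half.
print(f"Submitted:         {negative / (positive + negative):.0%} negative")
# Publication bias cuts the visible negative share by roughly a third.
print(f"Published:         {pub_neg / published:.0%} negative")
# Outcome switching relabels ten negatives as positive.
print(f"After switching:   {(pub_neg - switched) / published:.0%} negative")
# After spin, only four papers read as plainly negative.
print(f"Plainly reported:  {plainly_negative / published:.0%} negative")
# Citation bias skews what later readers actually encounter.
print(f"Citation-weighted: {cites_neg / (cites_pos + cites_neg):.0%} negative")
```

Each stage shrinks the visible share of negative results, from about half of what was submitted to just 5% of what is plainly reported.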
As Jussim concludes, it’s safe to say these biases are not confined to antidepressant research. He also writes of the “canonization effect”, which occurs when certain conclusions become widely accepted by scientists:
“It is not that [the] underlying research is ‘invalid.’ It is that [the] full scope of findings is mixed, but that the mixed nature of those findings does not make it into what gets canonized.”
I would say canonization applies more broadly across areas of research. For example, in climate research, empirics often take a back seat to theoretical models “calibrated” over short historical records. The theoretical models often incorporate “canonized” climate change doctrine which, on climatological timescales, can only be classified as speculative. Of course, the media and the public have difficulty distinguishing this practice from real empirics.
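To make the calibration point concrete, here is a toy sketch, and only a toy: the series is synthetic and of my own construction, and no claim is made about how any actual climate model is built. A linear trend fitted to a short window of a longer, noisy, cyclical record looks well-calibrated in-sample but extrapolates badly:

```python
import numpy as np

# Purely illustrative: a trend "calibrated" on a short window of a
# noisy cyclical series fits well in-sample but extrapolates poorly.
# The series is synthetic; no actual climate data or model is implied.

rng = np.random.default_rng(0)
t = np.arange(200)
# Long record: a slow oscillation plus noise, with no true long-run trend.
series = np.sin(2 * np.pi * t / 120) + rng.normal(0, 0.2, t.size)

n = 40                                      # short calibration window
slope, intercept = np.polyfit(t[:n], series[:n], 1)
fitted = slope * t + intercept

in_rmse = np.sqrt(np.mean((fitted[:n] - series[:n]) ** 2))
out_rmse = np.sqrt(np.mean((fitted[n:] - series[n:]) ** 2))
print(f"In-sample RMSE:     {in_rmse:.2f}")   # small: the fit looks good
print(f"Out-of-sample RMSE: {out_rmse:.2f}")  # large: the trend keeps climbing
```

The short window catches only the rising half of the cycle, so the fitted trend keeps climbing long after the underlying series has turned over.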
All this is compounded by the institutional biases introduced by the grant-making process, the politicization of certain areas of science (another source of publication bias), and mission creep within government bureaucracies. In fact, some of these agencies control the very data upon which much research is based (the National Oceanic and Atmospheric Administration, for example), and there is credible evidence that this information has been systematically distorted over time.
The authors of the Cambridge study discuss efforts to mitigate the biases in published research. Unfortunately, reforms have met with mixed success at best. The antidepressant research reflects tendencies that are all too human and perhaps financially motivated. Add to that the political motivation underlying the conduct of broad areas of research and the dimensions of the problem seem almost insurmountable without a fundamental revolution of ethics within the scientific community. For now, the biases have made “follow the science” into something of a joke.