We’ve just published the world’s first randomised trial of funding (paper available here). It was a truly novel study with a gold-standard design and a published protocol. So why did it take two years to get through peer review?
Too much care
Our team also recently published a large randomised trial about reducing unnecessary care at the end of life (paper available here).
A completely different area to funding, but again an important research question, with a strong study design and a peer-reviewed protocol. This paper also struggled to pass peer review, although it took only a year.
“Negative” results
Both trials were “negative”, meaning they did not show a statistically significant effect on the primary outcome. Research funding had no clear effect on researchers’ publication numbers, and our hospital intervention did not reduce unnecessary care.
Almost all our peer reviewers wanted different results.
For the funding paper, the reviewers requested the following:
- Fit a different regression model (two different models were suggested)
- Change the data used for the primary outcome (four alternatives were suggested)
- Change the variables adjusted for in the regression model
- Add a new outcome
- Remove a pre-specified outcome
- Collect more data
So the reviewers were giving us a green light to p-hack and misreport.
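To see why this amounts to a green light for p-hacking, here is a minimal simulation, my own illustration rather than anything from either paper. It assumes seven independent looks at null data, roughly matching the number of alternative analyses requested, with arbitrary sample sizes and a simple t-test; real re-analyses of the same trial are correlated, so the true inflation would be somewhat smaller.

```python
# Minimal sketch (illustrative assumptions, not from either paper) of why
# re-analysing a trial many ways inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_looks, n_per_arm = 10_000, 7, 100

false_positives = 0
for _ in range(n_sims):
    # Null world: treatment and control arms come from the same distribution
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_arm),
                        rng.normal(size=n_per_arm)).pvalue
        for _ in range(n_looks)
    ]
    # "P-hacking": report whichever analysis gives the smallest p-value
    if min(p_values) < 0.05:
        false_positives += 1

print(f"Chance of at least one 'significant' result: {false_positives / n_sims:.2f}")
# Prints roughly 0.30, close to the theoretical 1 - 0.95**7
```

With seven independent analyses the chance of at least one false positive is about 1 − 0.95⁷ ≈ 30%, six times the nominal 5%.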
End of life
For the end of life study, some reviewers told us that our intervention was never going to work, which smacked strongly of hindsight bias. The referees also said that we had chosen the wrong primary outcome and should have included different outcomes.
Well-known bias
The bias of reviewers against “negative” studies is well-known, and was cleverly illustrated by this randomised trial of peer review.
These experiences have been a reminder of why some researchers leave their “negative” results in the file drawer. Some may not even bother trying, after previous bad experiences. Others might try, but give up after getting the same “wise after the event” comments from peer reviewers.
It’s another reason to wonder what service journals are providing. I appreciate that there are journals that welcome “negative” results, although their reviewers don’t always get the message.
The open peer review of preprints by MetaROR is needed now more than ever.
Footnote: Our trials were not perfect
Of course, I am biased about our group’s work and think that everything we do should sail through peer review.
Neither trial was perfect. The funding trial was smaller than we had hoped and was complicated by repeat applications. Our end-of-life trial was hit by COVID and our power calculation was way off.
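As a rough illustration of how a power calculation can end up “way off” (a generic sketch, not our trial’s actual calculation; the effect sizes below are assumptions chosen for illustration), halving the assumed effect size roughly quadruples the required sample size:

```python
# Generic power-calculation sketch (illustrative effect sizes, not our
# trial's actual assumptions): the required sample size is very sensitive
# to the assumed effect, which is how a pandemic-disrupted trial can end
# up badly underpowered.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for effect_size in (0.5, 0.25):
    n = power.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
    print(f"Assumed effect size {effect_size}: {n:.0f} patients per arm")
# Roughly 64 per arm at an effect size of 0.5,
# and roughly four times that at 0.25
```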
I am happy to read valid criticism of our papers – another reason for open peer review. Applying my own hindsight, both trials should have been registered reports.