For a recent observational study I tried to limit the use of p-values in the paper. My colleagues wanted more p-values and I had to politely push back. During one team meeting I even offered to put the p-values in if someone could accurately tell me what they meant … silence.
Predicting that the reviewers would also want to see more p-values, I added this sentence to the paper’s methods: “We have tried to limit the use of p-values, as they are often misunderstood or misinterpreted, and elected to discuss clinically meaningful differences.” This was followed by a citation to Steve Goodman’s excellent “dirty dozen” paper on p-values (Goodman 2008).
The Statistical Editor didn’t like my sentence and said: “The authors should compare the various groups and report p-values. The readership of the AJRCCM is experience [sic] enough to not misunderstand.”
The journal rejected our paper, so feel free to interpret what follows as sour grapes.
Based on this latest experience, and my exhaustion from dealing with p-values, I am making a pledge:
If my co-authors insist on p-values, then I will take my name off the paper. I’ll try to get a statement in the acknowledgments like, “Adrian Barnett would have been an author, but it was him or the p-values.”
If journal editors or reviewers insist on p-values, then I will try to convince them otherwise and then either publish elsewhere or take my name off the paper.
Is this a gimmick? Possibly, but I’m really tired of people insisting on using a statistic that they don’t understand. And “using” isn’t the right word, as the word I want would mean: “to abandon all other scientific thinking and any other available evidence.” Perhaps “significancing”?
I’m hoping this one big decision will save me from a lot of little decisions.
Can I do it? A dozen months without a dirty dozen. Going cold Tukey (sic). I’ll report back in a year.
I reserve the right to use a Bayesian or bootstrap p-value.
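For anyone wondering what a bootstrap p-value looks like in practice, here is a minimal sketch in Python. Everything in it is illustrative (the function name, the simulated two-group data), not taken from any paper of mine; it resamples under the null of equal means and counts how often the resampled difference is at least as extreme as the observed one.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_p_value(a, b, n_boot=10_000):
    """Two-sided bootstrap p-value for a difference in means.

    Imposes the null by shifting both groups to a common mean,
    resamples each group with replacement, and returns the
    proportion of resampled differences at least as extreme
    as the observed difference.
    """
    observed = abs(a.mean() - b.mean())
    pooled_mean = np.concatenate([a, b]).mean()
    a0 = a - a.mean() + pooled_mean  # both groups now share a mean
    b0 = b - b.mean() + pooled_mean
    count = 0
    for _ in range(n_boot):
        ra = rng.choice(a0, size=a0.size, replace=True)
        rb = rng.choice(b0, size=b0.size, replace=True)
        if abs(ra.mean() - rb.mean()) >= observed:
            count += 1
    return (count + 1) / (n_boot + 1)  # add-one so p is never exactly zero

# Made-up example data: two groups of 40 with a true mean difference of 0.5
group_a = rng.normal(loc=0.0, scale=1.0, size=40)
group_b = rng.normal(loc=0.5, scale=1.0, size=40)
print(bootstrap_p_value(group_a, group_b))
```

The appeal is that the resampling distribution comes from the data themselves rather than a parametric assumption, which is at least a p-value you can explain.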
I can’t change papers in print.