Median Watch

Eyes on statistics

Statically significant

A colleague sent me a draft manuscript with the typo “statically significant”. A typo that passes a spell check, but surely one that wouldn’t get past reviewers and editors? Oh dear, a PubMed search reveals that it has snuck past reviewers and editors many, many times. There are 975 abstracts that use this nonsense phrase. There should be a celebration for the 1000th paper! Surely that’s only in the terrible journals though?
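For the curious, here is a minimal sketch of how a search like this could be reproduced programmatically, using Biopython’s Entrez module. The exact query string and the email address are my assumptions, not the author’s, and the live count will have drifted past 975 by now.

```python
# Sketch: count PubMed records whose title/abstract contains the exact
# phrase "statically significant", via NCBI's E-utilities (Biopython).
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI asks for a contact address

handle = Entrez.esearch(
    db="pubmed",
    term='"statically significant"[Title/Abstract]',  # assumed exact-phrase query
    retmax=0,  # we only want the total count, not the record IDs
)
record = Entrez.read(handle)
handle.close()

print(f'Records containing "statically significant": {record["Count"]}')
```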

Publication bias or research misconduct?

In my talk on bad statistics in medical research, I showed the infamous plot of Z-values created by Erik van Zwet. A version of the plot, made with David Borg, is shown below. The sample size is over 1.1 million Z-values. The two large spikes in Z-values are just below and above the statistical significance threshold of ±1.96, which corresponds to a two-sided p-value of 0.05. The plot looks like a Normal distribution that’s caved in.
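For readers who want the mechanics behind the threshold: under a standard Normal distribution, |z| = 1.96 gives a two-sided p-value of 0.05. Below is a minimal sketch of the calculation and of how such a histogram can be drawn; the z-values here are simulated for illustration, not the real 1.1 million, so they show a smooth symmetric curve rather than the tell-tale dip and spikes around ±1.96.

```python
# Sketch: the link between z-values and two-sided p-values, plus a
# histogram in the style of the van Zwet plot (simulated data only).
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# |z| = 1.96 corresponds to a two-sided p-value of 0.05
z = 1.96
p = 2 * stats.norm.sf(abs(z))  # survival function = 1 - CDF
print(f"two-sided p at z = {z}: {p:.3f}")  # ~0.050

# Simulated z-values; a real analysis would use the extracted values
rng = np.random.default_rng(42)
z_values = rng.normal(loc=0, scale=2, size=100_000)

plt.hist(z_values, bins=200)
plt.axvline(-1.96, linestyle="--")  # significance thresholds
plt.axvline(1.96, linestyle="--")
plt.xlabel("z-value")
plt.ylabel("count")
plt.show()
```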

Celebrate hard science

Two weeks ago I gave a fun online talk on statistics for the Young Scientist Forum of the German Society for Biomaterials. I had some great chats with the organisers and there were good questions from the audience. One good question was about how to interpret analysis results when things are not clear cut. During my presentation I had talked about not deleting difficult outliers and about not relying on p-values to give a falsely certain interpretation of what the results mean.

A year without p-values

One year ago, after another stupid fight with a journal about p-values, I made a pledge to go without them for a year. Here’s how it went. But first, why? I am aware of the arguments for and against p-values. I have used them for a long while and they can be a useful statistic. The reason I ditched them is that almost nobody in health and medical research interprets them correctly, wrongly thinking they reveal the probability that the null hypothesis is true (other misinterpretations are available).
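To see why a p-value is not the probability that the null hypothesis is true, a short back-of-the-envelope calculation helps. The numbers below are illustrative assumptions (10% of tested hypotheses are real effects, studies have 80% power), not anything from my pledge, but they show that among “significant” results the share of true nulls can be far above 5%.

```python
# Sketch: among studies with p < 0.05, how many are true nulls?
# Illustrative assumptions: 10% of tested hypotheses are real effects,
# and studies have 80% power when an effect exists.
prior_real = 0.10   # proportion of hypotheses with a genuine effect
alpha = 0.05        # significance threshold
power = 0.80        # P(p < 0.05 | real effect)

# P(p < 0.05) = false positives from nulls + true positives from real effects
p_sig = (1 - prior_real) * alpha + prior_real * power

# Bayes' theorem: P(null hypothesis true | p < 0.05)
p_null_given_sig = (1 - prior_real) * alpha / p_sig
print(f"P(null true | p < 0.05) = {p_null_given_sig:.2f}")  # ~0.36
```

Under these assumptions, more than a third of “statistically significant” findings come from true nulls, nowhere near the 5% that a naive reading of the threshold suggests.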

Dear p-values, it's not me, it's not you, it's everyone else

Yet another p-value run-in. For a recent observational study, I tried to limit the use of p-values in the paper. My colleagues wanted more p-values and I had to politely push back. During one team meeting I even offered to put the p-values in if someone could accurately tell me what they meant … silence. Predicting that the reviewers would also want to see more p-values, I added this sentence to the paper’s methods: “We have tried to limit the use of p-values, as they are often misunderstood or misinterpreted, and elected to discuss clinically meaningful differences.”