Median Watch

Eyes on statistics

Publishing "negative" results

We’ve just published the world’s first randomised trial of funding (paper available here). We ran a truly novel study, using a gold-standard study design, and with a published protocol. So why did it take two years to get through peer review?

Too much care

Our team also recently published a large randomised trial about reducing unnecessary care at the end of life (paper available here). A completely different area to funding, but again an important research question, with a strong study design, and a peer-reviewed protocol.

Statically significant

A colleague sent me a draft manuscript with the typo “statically significant”. A typo that passes a spell check, but surely it would not get past reviewers and editors? Oh dear, a PubMed search reveals that it has snuck past reviewers and editors many, many times. There are 975 abstracts that have used this nonsense phrase. There should be a celebration for the 1000th paper! Surely that’s only in the terrible journals though?
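
If you want to run a similar search yourself, here is a minimal sketch using Biopython’s Entrez interface to PubMed. The query uses the Title/Abstract field tag as an approximation of the search described above, and the email address is a placeholder; the count you get today will not match the 975 quoted in the post.

```python
from Bio import Entrez  # Biopython's wrapper around NCBI's E-utilities

Entrez.email = "you@example.org"  # placeholder: NCBI asks for a contact address

# Count PubMed records whose title or abstract contains the exact phrase
handle = Entrez.esearch(db="pubmed", term='"statically significant"[tiab]', retmax=0)
result = Entrez.read(handle)
handle.close()

print(result["Count"])  # the post found 975 such abstracts; the live count will have grown
```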

Publication bias or research misconduct?

In my talk on bad statistics in medical research, I showed the infamous plot of Z-values created by Erik van Zwet. A version of the plot, made with David Borg, is shown below. It is based on over 1.1 million Z-values. The two large spikes sit just below and just above the statistical significance threshold of ±1.96, the point where the two-sided p-value crosses 0.05. The plot looks like a Normal distribution that’s caved in.
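
As a reminder of why the spikes sit at ±1.96, here is a small Python sketch of the Z-to-p correspondence, plus a toy selective-publication filter that hollows out the middle of a distribution of Z-values. The mixture of effects and the 30% “publication rate” for non-significant results are invented for illustration; none of the numbers come from the van Zwet data.

```python
import numpy as np
from scipy.stats import norm

# Two-sided p-value for a Z-value: p = 2 * (1 - Phi(|z|))
def two_sided_p(z):
    return 2 * norm.sf(abs(z))

print(two_sided_p(1.96))  # ~0.05, the significance threshold marked by the spikes
print(two_sided_p(1.90))  # just inside the threshold: p > 0.05
print(two_sided_p(2.00))  # just outside the threshold: p < 0.05

# Toy illustration of a caved-in distribution: draw Z-values from an assumed
# mixture of effects, then "publish" all significant results but only a
# fraction of the non-significant ones.
rng = np.random.default_rng(1)
z = rng.normal(loc=0.5, scale=2.0, size=100_000)        # assumed distribution, not real data
keep = (np.abs(z) > 1.96) | (rng.random(z.size) < 0.3)  # drop 70% of non-significant results
published = z[keep]
# A histogram of `published` has a hollowed-out middle with jumps at +/- 1.96,
# a crude version of the caved-in shape described above.
```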

Celebrate hard science

Two weeks ago I gave a fun online talk on statistics for the Young Scientist Forum of the German Society for Biomaterials. I had some great chats with the organisers and there were good questions from the audience. One good question was about how to interpret the analysis results when things are not clear-cut. During my presentation I had talked about not deleting difficult outliers and not relying on p-values to give a falsely certain interpretation of what the results mean.

A year without p-values

One year ago, after another stupid fight with a journal about p-values, I made a pledge to go without them for a year. Here’s how it went. But first, why? I am aware of the arguments for and against p-values. I have used p-values for a long while and they can be a useful statistic. The reason I ditched them is that almost nobody in health and medical research interprets them correctly, wrongly thinking they reveal the probability that the null hypothesis is true (other misinterpretations are available).
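
To see why a p-value is not the probability that the null hypothesis is true, here is a minimal simulation sketch. The 50:50 mix of null and real effects, the effect size and the sample size are all assumptions chosen for illustration, not figures from the post.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_arm = 20_000, 30
effect_size = 0.3   # assumed standardised effect when the null is false
prob_null = 0.5     # assumed share of tested hypotheses that are truly null

null_true = rng.random(n_sims) < prob_null
p_values = np.empty(n_sims)
for i in range(n_sims):
    shift = 0.0 if null_true[i] else effect_size
    a = rng.normal(0.0, 1.0, n_per_arm)
    b = rng.normal(shift, 1.0, n_per_arm)
    p_values[i] = stats.ttest_ind(a, b).pvalue

significant = p_values < 0.05
print(f"'Significant' results: {significant.sum()}")
print(f"Share of those where the null was actually true: {null_true[significant].mean():.2f}")
# The share is well above 0.05: a p-value below 0.05 does not mean a 5% chance
# that the null hypothesis is true.
```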