Median Watch

Eyes on statistics

Here’s why you should (almost) never use a pie chart for your data

Reproduced from The Conversation.

Our lives are becoming increasingly data-driven. Our phones monitor our time and internet usage, and online surveys discern our opinions and likes. These data harvests are used to tell us how well we’ve slept or what we might like to buy. Numbers are becoming more important for everyday life, yet people’s numerical skills are falling behind. For example, the percentage of Year 12 schoolchildren in Australia taking higher and intermediate mathematics has been declining for decades.

Statically significant

A colleague sent me a draft manuscript with the typo “statically significant”. A typo that passes a spell check, but surely it would never get past reviewers and editors? Oh dear, a PubMed search reveals that it has snuck past reviewers and editors many, many times. There are 975 abstracts that have used this nonsense phrase. There should be a celebration for the 1,000th paper! Surely that’s only in the terrible journals, though?
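If you want to repeat the search yourself, here is a minimal sketch using Biopython’s Entrez interface (an assumption on my part; the original count may have come from the PubMed website directly, and the number will drift as new papers appear). The email address is a placeholder you must replace:

```python
# Minimal sketch: count PubMed abstracts containing the exact phrase
# "statically significant" via NCBI's E-utilities (Biopython's Entrez module).
# Requires `pip install biopython`.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

# Search PubMed for the exact phrase
handle = Entrez.esearch(db="pubmed", term='"statically significant"')
record = Entrez.read(handle)
handle.close()

print(f'Papers matching "statically significant": {record["Count"]}')
```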

Publication bias or research misconduct?

In my talk on bad statistics in medical research, I showed the infamous plot of Z-values created by Erik van Zwet. A version of the plot made with David Borg is shown below. The sample size is over 1.1 million Z-values.

[Plot: distribution of over 1.1 million Z-values]

The two large spikes in Z-values are just below and above the statistical significance threshold of ±1.96, where the two-sided p-value crosses 0.05. The plot looks like a Normal distribution that has caved in.
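To see why the spikes cluster at exactly ±1.96, here is a minimal sketch of the Z-to-p conversion (assuming a standard Normal reference distribution, which is how the plot’s threshold is defined; this is illustrative, not van Zwet’s actual code or data):

```python
# Two-sided p-values for Z statistics near the significance threshold
from scipy.stats import norm

for z in (1.90, 1.96, 2.00):
    p = 2 * norm.sf(abs(z))  # two-sided p-value under a standard Normal
    print(f"Z = {z:.2f} -> p = {p:.4f}")

# Z = 1.90 -> p = 0.0574  (not significant)
# Z = 1.96 -> p = 0.0500  (right on the threshold)
# Z = 2.00 -> p = 0.0455  (significant)
```

A tiny nudge in Z is the difference between “significant” and “not significant”, which is why results pile up just past the cliff edge.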

Celebrate hard science

Two weeks ago I gave a fun online talk on statistics for the Young Scientist Forum of the German Society for Biomaterials. I had some great chats with the organisers, and there were good questions from the audience. One question was about how to interpret analysis results when things are not clear cut. During my presentation I had talked about not deleting difficult outliers and not relying on p-values to give a falsely certain interpretation of what the results mean.

A year without p-values

One year ago, after another stupid fight with a journal about p-values, I made a pledge to go without them for a year. Here’s how it went. But first, why? I am aware of the arguments for and against p-values. I have used them for a long while, and they can be a useful statistic. The reason I ditched them is that almost nobody in health and medical research interprets them correctly, wrongly thinking they reveal the probability that the null hypothesis is true (other misinterpretations are available).
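To make that misinterpretation concrete, here is a minimal simulation sketch. All the settings (half the nulls true, low-powered studies with a mean shift of 1) are assumptions chosen for illustration, not real research data, but the point survives any reasonable choice: the share of significant results that come from true nulls is nothing like the p-values themselves.

```python
# Sketch: among "significant" results (p < 0.05), how many nulls were true?
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_tests = 100_000
true_null = rng.random(n_tests) < 0.5   # assume half of tested nulls are true
effect = np.where(true_null, 0.0, 1.0)  # non-null studies: mean shift of 1 (low power)

z = rng.normal(loc=effect, scale=1.0)   # one Z statistic per study
p = 2 * norm.sf(np.abs(z))              # two-sided p-values

significant = p < 0.05
share = np.mean(true_null[significant])
print(f"Share of significant results where the null is true: {share:.2f}")
# Roughly 0.23 with these settings: far from 5%, and not what any
# individual p-value "tells" you about the null hypothesis.
```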