Median Watch

Eyes on statistics

Breadcrumbs

Breadcrumbs feature in the fairy tale Hansel and Gretel, where two lost children drop them along their journey into the forest so that they can find their way home. I’ve been thinking a lot about breadcrumbs and paper trails because the research world is rapidly adopting artificial intelligence, including for generating questions, collecting data, and writing papers. Whilst these uses may sometimes be legitimate and make for faster and better research, it’s now also possible for bad actors to use AI to make a decent-quality paper in just 30 minutes.

Research Grant System Teeters on the Cusp of an AI Hellscape

Reproduced from Future Campus.

Last week, I blocked out two hours of protected time in my diary for “grant writing”. I’ve done this before, but the difference this time was that at the end of two hours I had a nearly finished NHMRC Ideas Grant. Of course I used AI. I used PRISM, a new free tool from OpenAI designed for academic writing. I gave PRISM the Ideas Grant criteria, a document on what makes a good application, and a title and an aim.

Research integrity is locked into an arms race with agentic AI slop

Reproduced from the LSE impact blog.

Science prides itself on being self-correcting. While scientific fakery has always been a problem, cases of fraud have been isolated, and a combination of scepticism and scrutiny has up to now generally worked to highlight published papers that are unreliable.

The 30 minute paper

However, the world of research and publishing is changing. The introduction of agentic artificial intelligence (AI) allows an automated assembly line of research tasks without any human checkpoints.

On slowing down

Last month I wrote a piece in Nature to announce that I’m going to halve my research output. For me, the reasons are clear. Publication numbers are skyrocketing and are now being supercharged by researchers using AI to write papers. Many papers are now created simply to pad CVs and have no scientific value. The scientific community needs to be careful with the limited resource that is peer review and must focus on quality over quantity.

Testing baseline tables in trials for signs of fraud

When fraudsters make up research data, they can make mistakes. Real data is rich and complex, whilst fraudsters are on a get-rich-quick scheme and make slapdash errors. One such mistake occurs in randomised trials, where it’s standard to include a baseline table comparing the randomised groups. Because the groups are randomised, their summary statistics should be similar. Fraudsters have no sense of ‘similar’, and so create data where the groups are nearly identical.
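The idea of groups being “too similar” can be put in rough numbers: under genuine randomisation, the standardised difference between two arms of size n is typically of order sqrt(2/n), whereas crudely fabricated data that copies one arm into the other shows differences far smaller than chance allows. A minimal Python sketch with simulated data only (the function names, thresholds, and fabrication model here are illustrative assumptions, not any published test):

```python
import math
import random
import statistics

random.seed(42)
N = 100  # patients per arm

def standardized_difference(a, b):
    """Absolute mean difference between two arms, in pooled-SD units."""
    pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled_sd

def simulate_baseline(fraud):
    """One baseline variable for a two-arm trial.

    Genuine randomisation draws both arms from the same population;
    crude fabrication copies one arm and adds trivial jitter.
    """
    a = [random.gauss(50, 10) for _ in range(N)]
    if fraud:
        b = [x + random.gauss(0, 0.1) for x in a]
    else:
        b = [random.gauss(50, 10) for _ in range(N)]
    return standardized_difference(a, b)

# Average standardised difference over many simulated trials
real_avg = statistics.mean(simulate_baseline(False) for _ in range(200))
fake_avg = statistics.mean(simulate_baseline(True) for _ in range(200))

print(f"genuine randomisation: {real_avg:.3f}")  # roughly 0.11 for N = 100
print(f"fabricated arms:       {fake_avg:.4f}")  # near zero: suspiciously similar
```

In practice, tests of this kind (such as John Carlisle’s well-known method) examine the distribution of baseline p-values across many variables and many trials; a pile-up of p-values near 1 is the statistical signature of groups that match too well.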