Unintentional hilarity in statistical analysis plan guidelines
Maybe hilarity is a bit strong. I've just been looking at the recently published (in JAMA) Guidelines for the Content of Statistical Analysis Plans in Clinical Trials. Most of the action is in the eAppendix 2 "Explanation and Elaboration of Essential Items," where they go through the essential items in exhaustive and sometimes mind-numbing detail. It's … Continue reading Unintentional hilarity in statistical analysis plan guidelines
The Vest
Via Twitter, I came across a blog post by Dr John Mandrola (here) on the VEST trial, whose results were recently presented at the American College of Cardiology Annual Scientific Session. The trial evaluated a wearable cardioverter-defibrillator in patients after myocardial infarction (conference abstract (without results) here). Dr Mandrola seems not to like the … Continue reading The Vest
Bayesian trial in the real world
This post arose from a discussion on Twitter about a recently published randomised trial. Twitter isn’t the best forum for debate, so I wanted to summarise my thoughts here in more detail. What was interesting about the trial was that it used a Bayesian analysis, but this provoked a lot of reaction on Twitter that … Continue reading Bayesian trial in the real world
Language, confidence intervals and beliefs
People often speak and write about values of treatment effects outside their confidence intervals as being “excluded.” For example: “the risk ratio for major morbidity was 0.98 (95% CI 0.91, 1.06), which excluded any clinically important effects.” I just made that up, but you often see and hear similar statements. What understanding do people take from … Continue reading Language, confidence intervals and beliefs
Best sample size calculation ever!
I don't want to start obsessing about sample size calculations, because most of the time they're pretty pointless and irrelevant, but I came across a great one recently. My award for least logical sample size calculation goes to Mitesh Patel et al, Intratympanic methylprednisolone versus gentamicin in patients with unilateral Meniere's disease: a randomised, comparative … Continue reading Best sample size calculation ever!
Confidence (again)
I found a paper in a clinical journal about confidence intervals. I’m not going to give the reference, but it was published in 2017, and written by a group of clinicians and methodologists, including a statistician. Its main purpose was to explain confidence intervals to clinical readers – which is undoubtedly a worthwhile aim, as … Continue reading Confidence (again)
Trial results infographics
There is a fashion for producing eye-catching infographics of trial results. This is a good thing in some ways, because it’s important to get the results communicated to doctors and patients in a way they can understand. Here’s one from the recent WOMAN trial (evaluating tranexamic acid for postpartum haemorrhage). What’s wrong with this? To my mind … Continue reading Trial results infographics
The future is still in the future
I just did a project with a work experience student that involved looking back through four top medical journals from the past year (NEJM, JAMA, Lancet and BMJ) for reports of randomised trials. As you can imagine, there were quite a lot - I'm not sure exactly how many because only a subset were … Continue reading The future is still in the future
Hospital-free survival
One of the consequences of the perceived need for a “primary outcome” is that people try to create a single outcome variable that will include all or most of the important effects, and will increase the incidence of the outcome, or in some other way allow the sample size calculation to give you a smaller … Continue reading Hospital-free survival
Rant
Here’s a photo of a slide from a recent talk by Doug Altman about hypothesis tests and p-values (I nicked the picture from Twitter; additions by me). I wasn’t there so I don’t know exactly what Doug said, but I totally agree that hypothesis testing and p-values are a massive problem. Nearly five years ago … Continue reading Rant