Put a number on results

Let’s require that any researcher making a claim in a study accompany it with their estimate of the chance that the claim is true — I call this a confidence index. As well as, “This drug is associated with elevated risk of a heart attack, relative risk (RR) = 2.4, P = 0.03,” investigators might add: “There is an 80% chance that this drug raises the risk, and a 60% chance that the risk is at least doubled.”

How sure are you of your result? Put a number on it
https://www.nature.com/articles/d41586-018-07589-2
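A minimal sketch of how such a “confidence index” might be computed, assuming the log relative risk is approximately normal and using a flat prior. The back-calculation of the standard error from the reported p-value is an illustration, not the article authors’ method, and the article’s 80%/60% figures presumably fold in a more skeptical prior:

```python
from math import log
from scipy.stats import norm

# Reported summary from the quoted example: RR = 2.4, two-sided p = 0.03.
rr, p = 2.4, 0.03

# Back out the standard error of log(RR) from the two-sided p-value.
z = norm.ppf(1 - p / 2)   # observed z-statistic, ~2.17
se = log(rr) / z          # SE of log(RR), ~0.40

# Under a flat prior, the posterior for log(RR) is ~ Normal(log(rr), se).
p_harm = 1 - norm.cdf(0, loc=log(rr), scale=se)          # P(RR > 1)
p_doubled = 1 - norm.cdf(log(2), loc=log(rr), scale=se)  # P(RR >= 2)

print(f"P(RR > 1)  = {p_harm:.2f}")     # ~0.98 with a flat prior
print(f"P(RR >= 2) = {p_doubled:.2f}")  # ~0.67 with a flat prior
```

With a flat prior the numbers come out higher than the article’s; any skeptical prior would pull both probabilities down.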

I don’t see how adding a subjective and self-serving assessment improves the science. I could also see potential liability issues.

Then will come “confidence number” hacking to complement “p-value” hacking.


Bayesian statistics allows these sorts of statements, but you have to be careful about making strong prior assumptions.

Under a prior assumption that the alternative hypothesis has a 50% chance of being true, a p-value of 0.05 gives a posterior probability for the alternative hypothesis of LESS than 50%. I think. Going from memory here, but I will look for that paper…
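Going by the published calibrations: Sellke, Bayarri & Berger (2001) bound the Bayes factor in favor of the alternative by 1/(−e·p·ln p). A minimal sketch of the implied posterior probability, treating the results as upper bounds since the exact value depends on the prior over effect sizes:

```python
from math import e, log

def posterior_upper_bound(p, prior_h1=0.5):
    """Upper bound on P(H1 | data) from the -e*p*ln(p) calibration
    (Sellke, Bayarri & Berger 2001), valid for p < 1/e."""
    bf_h1 = 1 / (-e * p * log(p))           # max Bayes factor for H1 over H0
    prior_odds = prior_h1 / (1 - prior_h1)
    post_odds = bf_h1 * prior_odds
    return post_odds / (1 + post_odds)

print(posterior_upper_bound(0.05))   # ~0.71
print(posterior_upper_bound(0.005))  # ~0.93
```

So at 50/50 prior odds, p = 0.05 supports the alternative with at most ~71% probability, far below the ~95% a casual reader might assume.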
I think we could do something along these lines that would give casual readers a much better idea of the strength of results.

We’re working on that one already. An independent record of the primary hypothesis prior to looking at the data is a good idea.

I’m working on an exploratory analysis today which has a very “flat” likelihood, meaning it’s hard to pin down a single best estimate. The result we are reporting is therefore somewhat tentative. BUT, my experience here tells me we are onto something, and the investigator thinks this will change medical practice. IOW, it’s important to report, and even if it isn’t exactly right, it’s close enough to be useful. p=0.004, so not too bad. :slight_smile:
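To unpack the “flat likelihood, small p” combination with hypothetical numbers (not the actual analysis): p = 0.004 only says the estimate is roughly 2.9 standard errors from zero; if the standard error is large, the same data remain compatible with a wide range of effect sizes:

```python
from scipy.stats import norm

# Hypothetical numbers: z ~ 2.88 corresponds to two-sided p ~ 0.004.
z = norm.ppf(1 - 0.004 / 2)   # ~2.88
se = 0.50                     # a large SE: the "flat" likelihood
estimate = z * se             # point estimate implied by p = 0.004

lo, hi = estimate - 1.96 * se, estimate + 1.96 * se
print(f"estimate = {estimate:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# estimate = 1.44, 95% CI = (0.46, 2.42): clearly nonzero, but spanning
# a roughly five-fold range of effect sizes -- onto something, not pinned down.
```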

All you have to do is exclude viable alternative hypotheses with an impoverished imagination, and magically your confidence increases. There is no good way around this.

You can have explicitly stated prior distributions that make clear what conclusions are allowed, recognition that all models are wrong at some level, and a demand for replication by independent sources. That still doesn’t fix everything, but it’s a step in the right direction.
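A sketch of the first point (a conjugate normal-normal update, hypothetical numbers): once the prior is written down explicitly, readers can see exactly how much a skeptical prior shrinks an estimate, rather than having the skepticism applied invisibly:

```python
def normal_posterior(prior_mean, prior_sd, est, se):
    """Conjugate normal-normal update: posterior for a mean given a
    normal prior and a normal likelihood summarized by (est, se)."""
    w_prior, w_data = 1 / prior_sd**2, 1 / se**2   # precisions
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, post_var**0.5

# Hypothetical: observed log(RR) = 0.88 (RR ~ 2.4) with SE = 0.40,
# under a skeptical prior centered on "no effect".
print(normal_posterior(0.0, 0.35, 0.88, 0.40))  # shrinks toward 0 (~0.38)
print(normal_posterior(0.0, 2.00, 0.88, 0.40))  # near-flat prior: ~unchanged
```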

I completely agree with the sentiment, but I am not sure it would work in practice. In my view, the reliability of the results has as much to do with how the study is set up as with the statistics. I don’t think casual readers will appreciate the nuances.

True, and there are limits. The other day I was trying to politely explain to one of our physicists that 5-sigma is not a reasonable standard in the biomedical sciences. I only gently hinted that the method of applying that standard in physics is also flawed. To my knowledge, they do not use planned stopping rules for interim analyses, leading to inflated type I error (higher than the nominal 5-sigma level implies).
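The interim-look point is easy to demonstrate by simulation (hypothetical setup, using a nominal 5% level rather than 5-sigma so the inflation is visible in 10,000 trials): testing accumulating data at every look, with no planned stopping rule, inflates the type I error well past the nominal level:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_max, looks = 10_000, 500, 10
alpha_z = 1.96  # nominal two-sided 5% threshold applied at every look

false_positives = 0
for _ in range(n_trials):
    x = rng.standard_normal(n_max)   # data generated under the null
    for n in range(n_max // looks, n_max + 1, n_max // looks):
        z = x[:n].mean() * np.sqrt(n)   # z-statistic at this interim look
        if abs(z) > alpha_z:            # stop at the first "significant" look
            false_positives += 1
            break

print(f"type I error with {looks} unplanned looks: {false_positives / n_trials:.3f}")
# ~0.19 rather than the nominal 0.05
```

With 10 unadjusted looks at α = 0.05, the realized type I error comes out near 0.19; the same mechanism inflates error at any threshold, including 5-sigma.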

Overall, Bayesian methods are mostly ignored in the biomedical sciences. That needs to change, but it may take another crisis to make it happen.