26.6.11 Clinical Significance versus Statistical Significance

The Skeptic's Health Journal Club: Of Ear Lobe Creases and Heart Disease

A positive study means there is a difference between two or more groups that is greater than one would expect to see by random chance alone. So, for instance, in a prospective study one might find that all ten of the treated people got well and none of the untreated people did, something one would not expect from chance alone.

But what if 4/10 treated people got well? Could this be chance? What if 6/10 treated people and 4/10 untreated people got well? Statistical tests are then applied to estimate how likely it is that the difference seen between the two groups was simply random chance.
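As an illustration (the original text does not name a specific test), one way such a check might be run on the 6/10 versus 4/10 scenario is Fisher's exact test, shown here as a minimal Python sketch using SciPy:

```python
# Minimal sketch: Fisher's exact test on the 6/10 vs 4/10 example above.
# The 2x2 table counts how many people in each group got well or did not.
from scipy.stats import fisher_exact

table = [
    [6, 4],  # treated:   6 got well, 4 did not
    [4, 6],  # untreated: 4 got well, 6 did not
]

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # about 0.66: a gap this size is easily produced by chance
```

With a p-value of roughly 0.66, a difference of 6/10 versus 4/10 in groups this small is exactly the sort of thing chance alone produces all the time.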

This likelihood is commonly expressed as the p-value, so one will see statements along the lines of "the difference was statistically significant, p < 0.01". This implies (excluding issues of bias and whether the correct statistical test was done) that there is less than a 1% chance of seeing this much of a difference between the two groups as a purely random occurrence. If the p-value is 0.05, there is a 5% chance of seeing this much of a difference when the treatment really had no effect and it was just chance that more people got well in the treatment group. By convention, a p-value of less than 0.1 is described as "tending towards significance," while a p-value of less than 0.05 is considered statistically significant, a positive study.
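To make that interpretation concrete, here is a small simulation using assumed numbers rather than any real study: both groups are given the same 50% chance of getting well (that is, the treatment does nothing), and we count how often chance alone produces a gap at least as large as the 6/10 versus 4/10 difference above.

```python
# Minimal sketch: simulate the "no effect" scenario and see how often chance
# alone yields a difference at least as large as the one observed.
# The 50% recovery rate and group size of 10 are assumptions for illustration.
import random

def chance_of_gap(n_per_group=10, recovery_rate=0.5,
                  observed_gap=2, n_trials=100_000):
    """Fraction of simulated trials where the group difference is >= observed_gap."""
    hits = 0
    for _ in range(n_trials):
        treated = sum(random.random() < recovery_rate for _ in range(n_per_group))
        untreated = sum(random.random() < recovery_rate for _ in range(n_per_group))
        if abs(treated - untreated) >= observed_gap:
            hits += 1
    return hits / n_trials

print(chance_of_gap())  # roughly 0.5, well above 0.05, so not statistically significant
```

A gap of two or more people shows up in about half of the simulated "no effect" trials, which is why a 6/10 versus 4/10 result would never be called statistically significant.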