8.12 – Calculating Effect Size
When researchers find a “statistically significant” result, often shortened to a “significant” result, it should be noted that the word “significant” does not refer to the size of the treatment effect, difference, or correlation. Instead, it refers to the probability of obtaining such a result by chance alone if there were no real treatment effect, difference, or correlation. It means that the result of the test statistic (e.g., the z-score) is “noteworthy” (a synonym for “significant”) because there is a very low probability that it could have occurred due to chance.
However, knowing the size of the treatment effect, difference, or correlation is also very important for researchers and clinicians. In fact, it is possible for researchers to find a “significant” treatment effect that is nonetheless barely noticeable. For example, researchers might test a pain-reducing medication and find that a reduction in participants’ pain was probably due to the medication (a significant treatment effect), but that the amount of pain reduction was so small as to be almost unnoticeable, such as a 0.10-point reduction on a 10-point pain scale (a small treatment effect size). As a result, even though there was a statistically significant effect, this particular pain-reducing medication should not go to market, because its effect on pain is too small to be meaningful.
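The pain-medication scenario above can be sketched numerically. The sample size and standard deviation below are hypothetical values chosen for illustration; the sketch assumes a one-sample z-test on the mean pain reduction and uses Cohen’s d (the mean difference divided by the standard deviation) as a standard measure of effect size. With a large enough sample, the tiny 0.10-point reduction is highly significant even though the effect size is trivial:

```python
import math

# Hypothetical numbers for illustration: a 0.10-point mean pain
# reduction on a 10-point scale, standard deviation of 2.0 points,
# and a large sample of n = 10,000 participants.
mean_reduction = 0.10
sd = 2.0
n = 10_000

# z-test of the mean reduction against zero (no effect)
z = mean_reduction / (sd / math.sqrt(n))   # 0.10 / 0.02 = 5.0

# one-tailed p-value from the standard normal distribution
p = 0.5 * math.erfc(z / math.sqrt(2))      # far below .05

# Cohen's d: the effect size in standard-deviation units
d = mean_reduction / sd                    # 0.05, a trivial effect

print(f"z = {z:.2f}, p = {p:.2e}")  # statistically significant
print(f"Cohen's d = {d:.2f}")       # but a very small effect size
```

The large sample makes the standard error tiny, so even a negligible mean difference produces a large z-score and a small p-value; Cohen’s d, which does not depend on sample size, reveals that the effect itself is trivial.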