## Computing number needed to treat from control group recovery rates and Cohen’s d

Furukawa and Leucht (2011) give a formula for calculating the number needed to treat (NNT), defined as (p. 1):

“the number of patients one would need to treat with the intervention in question in order to have one more success (or one less failure) than if treated in the control intervention”

based on the control group event rate (CER; for instance, the proportion of cases showing recovery) and Cohen’s d – an effect size in standard deviation units.

R code below:

```r
# Furukawa & Leucht's formula: under a normal model, the expected
# event rate in the treatment group is pnorm(d - qnorm(1 - CER)),
# and NNT is the reciprocal of the event rate difference.
NNT <- function(d, CER) {
  1 / (pnorm(d - qnorm(1 - CER)) - CER)
}
```
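A quick sanity check, with illustrative numbers that are not from the paper (the function is repeated so the snippet is self-contained):

```r
# NNT from Cohen's d and the control group event rate (CER),
# as in Furukawa & Leucht (2011)
NNT <- function(d, CER) {
  1 / (pnorm(d - qnorm(1 - CER)) - CER)
}

# A "medium" effect (d = 0.5) with a 20% control group recovery rate:
NNT(d = 0.5, CER = 0.2)  # about 6.01
```

So, under these (made-up) assumptions, you would need to treat roughly six patients for one extra success relative to the control condition.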

Reference

Furukawa, T. A., & Leucht, S. (2011). How to obtain NNT from Cohen’s d: Comparison of two methods. PLoS ONE, 6(4), e19070.

## Different notions of “effect size”

Tired of people equating “effect size” with “standardised measure of effect size”? Here’s an antidote, thanks to Shinichi Nakagawa and Innes C. Cuthill (2007). [Effect size, confidence interval and statistical significance: a practical guide for biologists. Biol. Rev., 82, 591–605.]

They review the different meanings of “effect size”:

• “Firstly, effect size can mean a statistic which estimates the magnitude of an effect (e.g. mean difference, regression coefficient, Cohen’s d, correlation coefficient). We refer to this as an ‘effect statistic’ (it is sometimes called an effect size measurement or index).
• “Secondly, it also means the actual values calculated from certain effect statistics (e.g. mean difference = 30 or r = 0.7; in most cases, ‘effect size’ means this, or is written as ‘effect size value’).
• “The third meaning is a relevant interpretation of an estimated magnitude of an effect from the effect statistics. This is sometimes referred to as the biological importance of the effect, or the practical and clinical importance in social and medical sciences.”
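To make the first two meanings concrete, here is a small R sketch (the data are made up for illustration) computing an unstandardised effect statistic (the raw mean difference) and two standardised ones (Cohen’s d and the point-biserial correlation) for the same comparison:

```r
# Two made-up groups, in the original measurement units
control   <- c(1, 2, 3, 4, 5)
treatment <- c(3, 4, 5, 6, 7)

# Unstandardised effect statistic: raw mean difference
mean_diff <- mean(treatment) - mean(control)   # 2, in original units

# Standardised effect statistic: Cohen's d with pooled SD
n1 <- length(control); n2 <- length(treatment)
pooled_sd <- sqrt(((n1 - 1) * var(control) + (n2 - 1) * var(treatment)) /
                  (n1 + n2 - 2))
d <- mean_diff / pooled_sd                     # about 1.26

# The same comparison expressed as a point-biserial correlation
group <- rep(c(0, 1), times = c(n1, n2))
r <- cor(group, c(control, treatment))         # about 0.58
```

The effect *statistic* is the formula (mean difference, d, r); the effect size *value* is the particular number it yields here (2, 1.26, 0.58).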

They argue in favour of confidence intervals, as these “are not simply a tool for NHST [significance testing], but show a range of probable effect size estimates with a given confidence.”
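For instance (again with made-up data), R’s `t.test` reports a confidence interval around the unstandardised mean difference, which gives you the significance-test information and a range of plausible effect sizes at the same time:

```r
# Made-up groups for illustration
control   <- c(1, 2, 3, 4, 5)
treatment <- c(3, 4, 5, 6, 7)

# Welch t-test; the 95% CI is on the mean difference (treatment - control)
fit <- t.test(treatment, control)
fit$conf.int   # about (-0.31, 4.31)
```

The interval spanning zero tells you the test is non-significant at the 5% level, but it also shows the data are compatible with anything from a negligible effect to a difference of about four units.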

They also cite Wilkinson, L., & the Task Force on Statistical Inference (1999) [Statistical methods in psychology journals. American Psychologist, 54, 594–604]:

“our focus on these two standardised effect statistics does not mean priority of standardised effect statistics (r or d) over unstandardised effect statistics (regression coefficient or mean difference) and other effect statistics (e.g. odds ratio, relative risk and risk difference). If the original units of measurement are meaningful, the presentation of unstandardised effect statistics is preferable over that of standardised effect statistics (Wilkinson & the Task Force on Statistical Inference, 1999).”

Good stuff, this.