Privilege hazard

“The problems of gender and racial bias in our information systems are complex, but some of their key causes are plain as day […]. When data teams are primarily composed of people from dominant groups, those perspectives come to exert outsized influence on the decisions being made—to the exclusion of other identities and perspectives. This is not usually intentional; it comes from the ignorance of being on top. We describe this deficiency as a privilege hazard.”

– Catherine D’Ignazio and Lauren F. Klein (2020). Data feminism. MIT Press.


Kharkiv, statistics, and causal inference

As news comes in (14 May 2022) that Ukraine has won the battle of Kharkiv* and Russian troops are withdrawing, it may be of interest to know that a major figure in statistics and causal inference, Jerzy Neyman (1894-1981), trained there as a mathematician from 1912 to 1916. If you have ever used a confidence interval or conceptualised causal inference in terms of potential outcomes, then you owe him a debt of gratitude.

“[Neyman] was educated as a mathematician at the University of Kharkov*, 1912-16. After this he became a Lecturer at the Kharkov Institute of Technology with the title of Candidate. When speaking of these years he always stressed his debt to Sergei Bernstein, and his friendship with Otto Struve (later to meet him again in Berkeley). His thesis was entitled ‘Integral of Lebesgue’.” (Kendall et al., 1982)

* Харків (transliterated to Kharkiv) in Ukrainian, Харьков (transliterated to Kharkov) in Russian.

Census data on trans and non-binary people in Canada

Canada published census data on trans and non-binary people on 27 April 2022. Here’s a table of the values they presented in a pie chart (why a pie chart, Canada?). Individuals in the census were aged 15 or above and living in a private household in May 2021.

Gender N %
Cis man 14,814,230 48.83
Cis woman 15,421,085 50.83
Trans man 27,905 0.09
Trans woman 31,555 0.10
Non-binary 41,355 0.14
Total 30,336,130 100.00
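As a quick arithmetic check, the percentages can be recomputed from the counts:

```python
# Census counts of people aged 15+ in private households, May 2021
counts = {
    "Cis man": 14_814_230,
    "Cis woman": 15_421_085,
    "Trans man": 27_905,
    "Trans woman": 31_555,
    "Non-binary": 41_355,
}

total = sum(counts.values())  # 30,336,130
for gender, n in counts.items():
    print(f"{gender:12s} {n:>12,} {100 * n / total:6.2f}%")
```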


Efficacy RCTs as survey twins

Surveys attempt to estimate a quantity of a finite population using a probability sample from that population. How people ended up in the population is somebody else’s problem – demographers, perhaps.

Survey participants are sampled at random from this finite population without replacement. Part a of the figure below illustrates this. Green blocks denote people who are surveyed and from whom we collect data. Grey blocks denote people we have not surveyed; we would like to infer what their responses would have been, if they had been surveyed too.

RCTs randomly assign participants to treatment or control conditions. This is illustrated in part b of the figure above: green cells denote treatment and purple cells denote control. There are no grey cells since we have gathered information from everyone in the finite population. But in a way, we haven’t really.

An alternative way to view efficacy RCTs that aim to estimate a sample average treatment effect (SATE) is as a kind of survey. This is illustrated in part c. Now the grey cells return.

There is a finite population of people who present for a trial, often with little known about how they ended up in that population – not dissimilar to the situation for a survey. (But who studies how people end up in a trial – trial demographers?)

Randomly assigning people to conditions generates two finite populations of theoretical twins, identical except for treatment assignment and the consequences thereafter. One theoretical twin receives treatment and the other receives control. But we only obtain the response from one of the twins, i.e., either the treatment or the control twin. (You could also think of these theoretical twins’ outcomes as potential outcomes.)

Looking individually at one of the two theoretical populations, the random assignment to conditions has generated a random sample from that population. We really want to know what the outcome would have been for everyone in the treatment condition, if everyone had been assigned treatment. Similarly for control. Alas, we have to make do with a pair of surveys that sample from these two populations.
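The twin populations can be sketched in a small simulation (Python; the outcome distributions are invented for illustration). Random assignment draws a without-replacement sample from each twin population, and each condition's observed mean is a survey estimate of the corresponding population mean:

```python
import random

random.seed(42)

N = 1000
# Hypothetical finite population of theoretical twins:
# each person has a control outcome and a treatment outcome
twins = []
for _ in range(N):
    y0 = random.gauss(10, 2)          # outcome under control
    twins.append((y0, y0 + random.gauss(2, 1)))  # outcome under treatment

# Random assignment: half to treatment, half to control, without replacement
treated = set(random.sample(range(N), N // 2))

obs_treat = [twins[i][1] for i in range(N) if i in treated]
obs_control = [twins[i][0] for i in range(N) if i not in treated]

# The observed treatment mean is a survey estimate of the mean outcome
# had *everyone* in the finite population been treated
full_treat_mean = sum(y1 for _, y1 in twins) / N
est_treat_mean = sum(obs_treat) / len(obs_treat)
print(round(full_treat_mean, 2), round(est_treat_mean, 2))
```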

Viewing the Table 1 fallacy through the survey twin lens

There is a common practice of testing for differences in covariates between treatment and control. This is the Table 1 fallacy (see also Dean Eckles’s take on whether it really is a fallacy). Let’s see how it can be explained using survey twins.

Firstly, we have a census of covariates for the whole finite population at baseline, so we know the means with perfect precision. Treatment and control groups are surveys of the same population, so clearly no statistical test is needed. The sample means in both groups are likely to differ from each other and from the finite population mean of both groups combined. No surprises there: we wouldn’t expect a survey mean to be identical to the population mean. That’s why we use confidence intervals, or large samples so that those confidence intervals are very narrow.

What’s the correct analysis of an RCT?

It’s common to analyse RCT data using a linear regression model. The outcome variable is the endpoint, predictors are treatment group and covariates. This is also known as an ANCOVA. This analysis is easy to understand if the trial participants are a simple random sample from some infinite population. But this is not what we have in efficacy trials as modelled by survey twins above. If the total number of participants in the trial is 1000, then we have a finite population of 1000 in the treatment group and a finite population of 1000 in the control group – together, 2000. In total we have 1000 observations, though, split in some proportion between treatment and control.

Following through on this reasoning, it sounds like the correct analysis uses a stratified independent sampling design with two strata, coinciding with treatment and control groups. The strata populations are both 1000, and a finite population correction should be applied accordingly.

It’s a little more complicated, as I discovered in a paper by Reichardt and Gollob (1999), who independently derived results found by Neyman (1923/1990). Their results highlight a wrinkle in the argument when conducting a t-test on two groups for finite populations as described above, with general implications for analyses with covariates too. The wrinkle is that the two theoretical populations are not independent of each other.

The authors derive the standard error of the mean difference between X and Y as

\(\displaystyle \sqrt{\frac{\sigma_X^2}{n_X} + \frac{\sigma_Y^2}{n_Y}-\left[ \frac{(\sigma_X-\sigma_Y)^2}{N} + \frac{2(1-\rho) \sigma_X \sigma_{Y}}{N} \right]}\),

where \(\sigma_X^2\) and \(\sigma_Y^2\) are the variances of the two groups, \(n_X\) and \(n_Y\) are the observed group sample sizes, and \(N\) is the total sample (the finite population) size. Finally, \(\rho\) is the unobservable correlation between treatment and control outcomes for each participant – unobservable because we only get either the treatment outcome or the control outcome for each participant and not both. The terms in square brackets correct for the finite population.

If the variances are equal (\(\sigma_X = \sigma_Y\)) and the correlation \(\rho = 1\), then the correction vanishes (glance back at the numerators in the square brackets to see). This is great news if you are willing to assume that treatments have constant effects on all participants (an assumption known as unit-treatment additivity): the same regression analysis that you would use assuming a simple random sample from an infinite population applies.

If the variances are equal and the correlation is 0, then this is the same standard error as in the stratified independent sampling design with two strata described above. Or at least it was for the few examples I tried.

If the variances can be different and the correlation is one, then this is the same standard error as per Welch’s two-sample t-test.
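This standard error is straightforward to compute directly. A minimal Python sketch (the function name is mine), confirming numerically that the correction vanishes when the variances are equal and \(\rho = 1\):

```python
from math import sqrt

def se_mean_diff(sd_x, sd_y, n_x, n_y, N, rho):
    """Standard error of the mean difference with the finite population
    correction derived by Reichardt and Gollob (1999) / Neyman (1923/1990)."""
    uncorrected = sd_x**2 / n_x + sd_y**2 / n_y
    correction = (sd_x - sd_y)**2 / N + 2 * (1 - rho) * sd_x * sd_y / N
    return sqrt(uncorrected - correction)

# Equal variances and rho = 1: the correction vanishes, leaving the usual
# infinite-population standard error, sqrt(1/50 + 1/50)
print(se_mean_diff(1, 1, 50, 50, 100, rho=1))
# rho = 0: the finite population correction bites
print(se_mean_diff(1, 1, 50, 50, 100, rho=0))
```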

So, which correlation should we use? Reichardt and Gollob (1999) suggest using the reliability of the outcome measure to calculate an upper bound on the correlation. More recently, Aronow, Green, and Lee (2014) proved a result that puts bounds on the correlation based on the observed marginal distribution of outcomes, and provide R code to copy and paste to calculate it. It’s interesting that a problem highlighted a century ago on something so basic – what standard error we should use for an RCT – is still being investigated now.


Aronow, P. M., Green, D. P., & Lee, D. K. K. (2014). Sharp bounds on the variance in randomized experiments. Annals of Statistics, 42, 850–871.

Neyman, J. (1923/1990). On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science, 5, 465-472.

Reichardt, C. S., & Gollob, H. F. (1999). Justifying the Use and Increasing the Power of a t Test for a Randomized Experiment With a Convenience Sample. Psychological Methods, 4, 117–128.


Standard errors of marginal means in an RCT

Randomised controlled trials (RCTs) typically use a convenience sample to estimate the mean effect of a treatment for study participants. Participants are randomly assigned to one of (say) two conditions, and an unbiased estimate of the sample mean treatment effect is obtained by taking the difference of the two conditions’ mean outcomes. The estimand in such an RCT is sometimes called the sample average treatment effect (SATE).

Some papers report a standard error for the marginal mean outcomes in treatment and control groups using the textbook formula

\(\displaystyle \frac{\mathit{SD_g}}{\sqrt{n_g}}\),

where \(\mathit{SD_g}\) is the standard deviation of outcomes in group \(g\) and \(n_g\) the number of observations in that group.

This formula assumes a simple random sample with replacement from an infinite population, so does not work for a convenience sample (see Stephen Senn, A Standard Error). I am convinced, but curious what standard error for each group’s mean would be appropriate, if any. (You could stop here and argue that the marginal group means mean nothing anyway. The whole point of running a trial is to subtract off non-treatment explanations of change such as regression to the mean.)

Let’s consider a two-arm RCT with no covariates and a coin toss determining who receives treatment or control. What standard error would be appropriate for the mean treatment outcome? Let the total sample size be \(N\) and quantities for treatment and control use subscripts \(t\) and \(c\), respectively.

Treatment outcome mean of those who received treatment

If we focus on the mean for the \(n_t\) participants who were assigned to treatment, we have all observations for that group, so the standard error of the mean is 0. This feels like cheating.

Treatment outcome mean of everyone in the sample

Suppose we want to say something about the treatment outcome mean for all \(N\) participants in the trial, not only the \(n_t\) who were assigned to treatment.

To see how to think about this, consider a service evaluation of \(N\) patients mimicking everything about an RCT except that it assigns everyone to treatment and uses a coin toss to determine whether someone is included in the evaluation. This is now a survey of \(n\) participants, rather than a trial. We want to generalise results to the finite population of \(N\) patients from which we sampled.

Since the population is finite and the sampling is done without replacement, the standard error of the mean should be multiplied by a finite population correction,

\(\displaystyle \mathit{FPC} = \sqrt{\frac{N-n}{N-1}}\).

This setup for a survey is equivalent to what we observe in the treatment group of an RCT. Randomly assigning participants to treatment gives us a random sample from a finite population, whose sampling frame we obtain by the end of the trial: all treatment and control participants. So we can estimate the SEM around the mean treatment outcome as:

\(\displaystyle \mathit{SEM_t} = \frac{\mathit{SD_t}}{\sqrt{n_t}} \sqrt{\frac{N-n_t}{N-1}}\).

If, by chance (probability \(1/2^N\)), the coin delivers everyone to treatment, then \(N = n_t\) and the FPC reduces to zero, as does the standard error.
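Putting this together in a short Python sketch (illustrative numbers, helper name mine):

```python
from math import sqrt

def sem_finite(sd, n, N):
    """Standard error of the mean for a sample of n drawn without
    replacement from a finite population of size N."""
    return (sd / sqrt(n)) * sqrt((N - n) / (N - 1))

# Say 50 of 100 trial participants landed in the treatment group
print(sem_finite(10, 50, 100))   # smaller than the naive 10/sqrt(50)
print(sem_finite(10, 100, 100))  # everyone treated: the SEM is 0
```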


If the marginal outcome means mean anything, then there are a couple of standard errors you could use, even with a convenience sample. But the marginal means seem irrelevant when the main reason for running an RCT is to subtract off non-treatment explanations of change following treatment.

If you enjoyed this, you may now be wondering what standard error to use when estimating a sample average treatment effect. Try Efficacy RCTs as survey twins.

Sample size determination for propensity score weighting

If you’re using propensity score weighting (e.g., inverse probability weighting), one question that will arise is how big a sample you need.

Solutions have been proposed that rely on a variance inflation factor (VIF). You calculate the sample size for a simple design and then multiply that by the VIF to take account of weighting.
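To illustrate the multiplication step: the sketch below (Python) uses Kish's design-effect approximation to the VIF, \(1 + \mathit{CV}^2\) of the weights – not Austin's method, which is discussed next – and the weights and base sample size are made up:

```python
import math

def kish_vif(weights):
    """Kish's approximate design effect for weighted estimates:
    1 + CV^2 of the weights, equivalently n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

base_n = 400  # sample size calculated for a simple, unweighted design
weights = [1.0, 1.0, 2.0, 2.0] * 100  # hypothetical inverse probability weights

required_n = math.ceil(base_n * kish_vif(weights))
print(required_n)  # 445
```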

But the problem is that it is difficult to choose a VIF in advance.

Austin (2021) has developed a simple method (R code in the paper) to estimate VIFs from c-statistics (area under the curve; AUC) of the propensity score models. These c-statistics are often published.

A larger c-statistic means greater separation between treatment and control, which in turn leads to a larger VIF and a larger required sample.

[Picture illustrating different c-statistics.]

The magnitude of the VIF also depends on the estimand of interest, e.g., the average treatment effect (ATE), the average treatment effect on the treated (ATET/ATT), or the average treatment effect in the overlap population (ATO).


Austin, P. C. (2021). Informing power and sample size calculations when using inverse probability of treatment weighting using the propensity score. Statistics in Medicine.

Two incontrovertible facts about RCTs

“… the following are two incontrovertible facts about a randomized clinical trial:

1. over all randomizations the groups are balanced;

2. for a particular randomization they are unbalanced.

Now, no [statistically] ‘significant imbalance’ can cause 1 to be untrue and no lack of a significant balance can make 2 untrue. Therefore the only reason to employ such a test must be to examine the process of randomization itself. Thus a significant result should lead to the decision that the treatment groups have not been randomized…”

– Senn (1994, p. 1716)

Senn, S. (1994). Testing for baseline balance in clinical trials. Statistics in Medicine, 13, 1715–1726.

Parametric versus non-parametric statistics

There is no such thing as parametric or non-parametric data. There are parametric and non-parametric statistical models.

“The term nonparametric may have some historical significance and meaning for theoretical statisticians, but it only serves to confuse applied statisticians.”

– Noether, G. E. (1984, p. 177)

“. . . the distribution functions of the various stochastic variables which enter into their problems are assumed to be of known functional form, and the theories of estimation and of testing hypotheses are theories of estimation of and of testing hypotheses about, one or more parameters, finite in number, the knowledge of which would completely determine the various distribution functions involved. We shall refer to this situation for brevity as the parametric case, and denote the opposite situation, where the functional forms of the distributions are unknown, as the non-parametric case.”

– Wolfowitz, J. (1942, p. 264)


Noether, G. E. (1984). Nonparametrics: The early years—impressions and recollections. American Statistician, 38(3), 173–178.

Wolfowitz, J. (1942). Additive Partition Functions and a Class of Statistical Hypotheses. The Annals of Mathematical Statistics, 13(3), 247–279.

ACME: average causal mediation effect

Suppose there are two groups in a study: treatment and control. There are two potential outcomes for an individual, \(i\): outcome under treatment, \(Y_i(1)\), and outcome under control, \(Y_i(0)\). Only one of the two potential outcomes can be realised and observed as \(Y_i\).

The treatment effect for an individual is defined as the difference in potential outcomes for that individual:

\(\mathit{TE}_i = Y_i(1) - Y_i(0)\).

Since we cannot observe both potential outcomes for any individual, we usually make do with a sample or population average treatment effect (SATE and PATE, respectively). Although these are unobservable (they are averages of unobservable differences in potential outcomes), they can be estimated. For example, with random treatment assignment, the difference in observed sample mean outcomes for the treatment and control groups is an unbiased estimator of the SATE. If we also have a random sample from the population of interest, then this difference in sample means gives us an unbiased estimate of the PATE.

Okay, so what happens if we add a mediator? The potential outcome is expanded to depend on both treatment group and mediator value.

Let \(Y_i(t, m)\) denote the potential outcome for \(i\) under treatment \(t\) and with mediator value \(m\).

Let \(M_i(t)\) denote the potential value of the mediator under treatment \(t\).

The (total) treatment effect is now:

\(\mathit{TE}_i = Y_i(1, M_i(1)) - Y_i(0, M_i(0))\).

Informally, the idea here is that we calculate the potential outcome under treatment, with the mediator value as it is under treatment, and subtract from that the potential outcome under control with the mediator value as it is under control.

The causal mediation effect (CME) is what we get when we hold the treatment assignment constant, but work out the difference in potential outcomes when the mediators are set to values they have under treatment and control:

\(\mathit{CME}_i(t) = Y_i(t, M_i(1)) - Y_i(t, M_i(0))\).

The direct effect (DE) holds the mediator constant and varies treatment:

\(\mathit{DE}_i(t) = Y_i(1, M_i(t)) - Y_i(0, M_i(t))\).
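These definitions give a standard decomposition of the total effect into a direct and a mediated part; the middle terms telescope:

\(\displaystyle \mathit{TE}_i = \underbrace{Y_i(1, M_i(1)) - Y_i(0, M_i(1))}_{\mathit{DE}_i(1)} + \underbrace{Y_i(0, M_i(1)) - Y_i(0, M_i(0))}_{\mathit{CME}_i(0)}\),

and symmetrically \(\mathit{TE}_i = \mathit{DE}_i(0) + \mathit{CME}_i(1)\).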

Note how both CME and DE depend on the treatment group. If there is no interaction between treatment and mediator, then

\(\mathit{CME}_i(0) = \mathit{CME}_i(1) = \mathit{CME}\)


\(\mathit{DE}_i(0) = \mathit{DE}_i(1) = \mathit{DE}\).

ACME and ADE are the averages of these effects. Again, since they are defined in terms of potential values (of outcome and mediator), they cannot be directly observed, but – given some assumptions – there are estimators.

Baron and Kenny (1986) provide an estimator in terms of regression equations. I’ll focus on two of their steps and assume there is no need to adjust for any covariates. I’ll also assume that there is no interaction between treatment and mediator.

First, regress the mediator (\(m\)) on the binary treatment indicator (\(t\)):

\(m = \alpha_1 + \beta_1 t\).

The slope \(\beta_1\) tells us how much the mediator changes between the two treatment conditions on average.

Second, regress the outcome (\(y\)) on both mediator and treatment indicator:

\(y = \alpha_2 + \beta_2 t + \beta_3 m\).

The slope \(\beta_2\) provides the average direct effect (ADE), since this model holds the mediator constant (note how this mirrors the definition of DE in terms of potential outcomes).

Now to work out the average causal mediation effect (ACME), we need to wiggle the outcome by however much the mediator moves between treat and control, whilst holding the treatment group constant. Slope \(\beta_1\) tells us how much the treatment shifts the mediator. Slope \(\beta_3\) tells us how much the outcome increases for every unit increase in the mediator, holding treatment constant. So \(\beta_1 \beta_3\) is ACME.
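A short simulation (pure Python; the true coefficients are invented for illustration) recovers these quantities from the two regressions. With a binary treatment, the slopes reduce to group-mean calculations, using the Frisch–Waugh–Lovell theorem for \(\beta_3\):

```python
import random

random.seed(1)

# Simulate t -> m -> y with a direct path t -> y.
# True values: beta1 = 2, beta3 = 1.5, so ACME = 3; ADE = 1.
n = 10_000
t = [i % 2 for i in range(n)]
m = [0.5 + 2.0 * ti + random.gauss(0, 1) for ti in t]
y = [1.0 + 1.0 * ti + 1.5 * mi + random.gauss(0, 1) for ti, mi in zip(t, m)]

def group_means(v):
    m1 = sum(vi for vi, ti in zip(v, t) if ti == 1) / t.count(1)
    m0 = sum(vi for vi, ti in zip(v, t) if ti == 0) / t.count(0)
    return m0, m1

# Step 1: regress m on binary t; the slope is the difference in group means
m0, m1 = group_means(m)
beta1 = m1 - m0

# Step 2: regress y on t and m. By Frisch-Waugh-Lovell, beta3 is the slope of
# group-demeaned y on group-demeaned m; beta2 is the remaining gap in y.
y0, y1 = group_means(y)
m_res = [mi - (m1 if ti else m0) for mi, ti in zip(m, t)]
y_res = [yi - (y1 if ti else y0) for yi, ti in zip(y, t)]
beta3 = sum(a * b for a, b in zip(y_res, m_res)) / sum(a * a for a in m_res)
beta2 = (y1 - y0) - beta3 * beta1

acme = beta1 * beta3  # average causal mediation effect
ade = beta2           # average direct effect
print(round(acme, 2), round(ade, 2))
```

Note that, by construction, ACME + ADE equals the difference in mean outcomes between the groups, i.e., the estimated total effect.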

For more, especially on complicating the Baron and Kenny approach, see Imai et al. (2010).


Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182.

Imai, K., Keele, L., & Yamamoto, T. (2010). Identification, Inference and Sensitivity Analysis for Causal Mediation Effects. Statistical Science, 25, 51–71.


Assumptions not often assessed or satisfied in published mediation analyses in psychology and psychiatry

Elizabeth Stuart et al. (2021) reviewed 206 articles using mediation analysis “in top academic psychiatry or psychology journals” from 2013 to 2018, to determine how many satisfied the assumptions of mediation analysis.

Here are the headline results (% of papers):

(The assumption of no interaction of exposure and mediator is as a percentage of the 97% of studies that used the Baron and Kenny approach.)

Although 42% of studies discussed mediation assumptions, “in most cases this discussion was simply an acknowledgement that the data were cross sectional and thus results should be interpreted with caution.”


Stuart, E. A., Schmid, I., Nguyen, T., Sarker, E., Pittman, A., Benke, K., Rudolph, K., Badillo-Goicoechea, E., & Leoutsakos, J.-M. (2021). Assumptions not often assessed or satisfied in published mediation analyses in psychology and psychiatry. Epidemiologic Reviews, In Press.