If we try to eliminate pay gaps by monitoring only single characteristics, such as gender or ethnicity, we can still end up with pay gaps between combinations of characteristics. One way this can happen is by appointing white women and Black men to senior management positions, but not appointing any Black women.
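A toy headcount makes the failure mode concrete (all numbers invented for illustration): monitoring gender or ethnicity alone shows equal senior-management rates, yet no Black woman is senior.

```python
# Hypothetical headcount: (gender, ethnicity, is_senior_manager)
staff = [
    ("woman", "white", True), ("woman", "white", True),
    ("man", "Black", True), ("man", "Black", True),
    ("man", "white", False), ("man", "white", False),
    ("woman", "Black", False), ("woman", "Black", False),
]

def senior_rate(predicate):
    """Share of staff matching `predicate` who are senior managers."""
    matched = [senior for gender, ethnicity, senior in staff
               if predicate(gender, ethnicity)]
    return sum(matched) / len(matched)

print(senior_rate(lambda g, e: g == "woman"))                   # 0.5
print(senior_rate(lambda g, e: g == "man"))                     # 0.5
print(senior_rate(lambda g, e: e == "Black"))                   # 0.5
print(senior_rate(lambda g, e: e == "white"))                   # 0.5
print(senior_rate(lambda g, e: g == "woman" and e == "Black"))  # 0.0
```

Monitoring by gender or by ethnicity alone reports no gap at all; only the intersectional breakdown reveals one.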
The idea of an intersection comes from set theory and describes where two sets overlap. For instance, the intersection of the set of Black people and the set of women is the set of Black women.
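In code, a set intersection is a one-liner (the names here are hypothetical):

```python
# Intersection (&) of two sets: the elements belonging to both
black_people = {"Aisha", "Marcus", "Nia"}
women = {"Aisha", "Nia", "Sofia"}

black_women = black_people & women
print(black_women == {"Aisha", "Nia"})  # True
```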
Intersectionality is a broad framework that promotes the study and elimination of oppression and exploitation of people in terms of combinations of characteristics.
Is intersectionality a theory, explaining why this form of discrimination occurs? Here’s Patricia Hill Collins (2019, p.51), a leading scholar in this area:
“Every time I encounter an article that identifies intersectionality as a social theory, I wonder what conception of social theory the author has in mind. I don’t assume that intersectionality is already a social theory. Instead, I think a case can be made that intersectionality is a social theory in the making.”
Collins, P. H. (2019). Intersectionality As Critical Social Theory. Duke University Press.
It is a cliché that randomised controlled trials (RCTs) are the gold standard if you want to evaluate a social policy or intervention – quasi-experimental designs (QEDs) are presumably the silver standard. But often it is not possible to use either, especially for complex policies. Theory-Based Evaluation is an alternative that has been around for a few decades, but what exactly is it?
In this post I will sketch out what some key texts say about Theory-Based Evaluation; explore one approach, contribution analysis; and conclude with discussion of an approach to assessing evidence in contribution analyses (and a range of other approaches) using Bayes’ rule.
For what it’s worth, I also propose dropping the category of “Theory-Based Evaluation”, but that’s a longer-term project…
Let’s get the obvious out of the way. All research, evaluation included, is “theory-based” by necessity, even if an RCT is involved. Outcome measures and interviews alone cannot tell us what is going on; some sort of theory (or story, account, narrative, …) – however flimsy or implicit – is needed to design an evaluation and interpret what the data means.
If you are evaluating a psychological therapy, then you probably assume that attending sessions exposes therapy clients to something that is likely to be helpful. You might make assumptions about the importance of the therapeutic relationship to clients’ openness, of any homework activities carried out between sessions, etc. RCTs can include statistical mediation tests to determine whether the various things that happen in therapy actually explain any difference in outcome between a therapy and comparison group (e.g., Freeman et al., 2015).
It is great if a theory makes accurate predictions, but theories are underdetermined by evidence, so this cannot be the only criterion for preferring one theory’s explanation over another (Stanford, 2017) – again, even if you have an effect size from an RCT. Lots of theories will be compatible with any RCT’s results. To see this, try a particular social science RCT and think hard about what might be going on in the intervention group beyond what the intervention developers have explicitly intended.
In addition to accuracy, Kuhn (1977) suggests that a good theory should be consistent with itself and with other relevant theories; have broad scope; bring “order to phenomena that in its absence would be individually isolated”; and produce novel predictions beyond current observations. There are no obvious formal tests for these properties, especially where theories are expressed in ordinary language and box-and-arrow diagrams.
Theory-Based Evaluation (title case)
Theory-Based Evaluation is a particular genre of evaluation that includes realist evaluation and contribution analysis. According to the UK government’s Magenta Book (HM Treasury, 2020, p. 43), Theory-Based methods of evaluation
“can be used to investigate net impacts by exploring the causal chains thought to bring about change by an intervention. However, they do not provide precise estimates of effect sizes.”
The Magenta Book acknowledges (p. 43) that “All evaluation methods can be considered and used as part of a [Theory-Based] approach”; however, Figure 3.1 (p. 47) is clear. If you can “compare groups affected and not affected by the intervention”, you should go for experiments or quasi-experiments; otherwise, Theory-Based methods are required.
Theory-Based Evaluation attempts to draw causal conclusions about a programme’s effectiveness in the absence of any comparison group. If a quasi-experimental design (QED) or randomised controlled trial (RCT) were added to an evaluation, it would cease to be Theory-Based Evaluation, as the title case term is used.
Example: Contribution analysis
Contribution analysis is an approach to Theory-Based Evaluation developed by John Mayne (28 November 1943 – 18 December 2020). Mayne was originally concerned with how to use monitoring data to decide whether social programmes actually worked when quasi-experimental approaches were not feasible (Mayne, 2001), but the approach evolved to have broader scope.
According to a recent summary (Mayne, 2019), contribution analysis consists of six steps (and an optional loop):
Step 1: Set out the specific cause-effect questions to be addressed.
Step 2: Develop robust theories of change for the intervention and its pathways.
Step 3: Gather the existing evidence on the components of the theory of change model of causality: (i) the results achieved and (ii) the causal link assumptions realized.
Step 4: Assemble and assess the resulting contribution claim, and the challenges to it.
Step 5: Seek out additional evidence to strengthen the contribution claim.
Step 6: Revise and strengthen the contribution claim.
Step 7: Return to Step 4 if necessary.
Here is a diagrammatic depiction of the kind of theory of change (or Theory of Change?) that could be plugged in at Step 2 (Mayne, 2015, p. 132), illustrating the cause-effect links an evaluation would aim to assess. (Note the heteronormative and marital assumptions.)
In this example, mothers are thought to learn from training sessions and materials, which then persuades them to adopt new feeding practices. This leads to children having more nutritious diets. The theory is surrounded by various contextual factors such as food prices. (See also Mayne, 2017, for a version of this that includes ideas from the COM-B model of behaviour.)
Step 4 requires analysts to “Assemble and assess the resulting contribution claim”. How are we to carry out that assessment? Mayne (2001, p. 14) suggests some questions to ask:
“How credible is the story? Do reasonable people agree with the story? Does the pattern of results observed validate the results chain? Where are the main weaknesses in the story?”
For me, the most credible stories would include experimental or quasi-experimental tests, with mediation analysis of key hypothesised mechanisms, and qualitative detective work to get a sense of what’s going on beyond the statistical associations. But the quant part of that would lift us out of the Theory-Based Evaluation wing of the Magenta Book flowchart. In general, plausibility will be determined outside contribution analysis in, e.g., quality criteria for whatever methods for data collection and analysis were used.
Although contribution analysis is intended to fill a gap where no comparison group is available, Mayne (2001, p. 18) suggests that further data might be collected to help rule out alternative explanations of outcomes, e.g., from surveys, field visits, or focus groups. He also suggests reviewing relevant meta-analyses, which could (I presume) include QED and RCT evidence.
It is not clear to me what the underlying theory of causation is in contribution analysis. It is clear what it is not (Mayne, 2019, pp. 173–4):
“In many situations a counterfactual perspective on causality—which is the traditional evaluation perspective—is unlikely to be useful; experimental designs are often neither feasible nor practical…”
“[Contribution analysis] uses a stepwise (generative) not a counterfactual approach to causality.”
(We will explore counterfactuals below.) I can guess what this generative approach could be, but Mayne does not provide precise definitions. It clearly isn’t the idea from generative social science in which collections of computational “agents”, representing individual people, are simulated to model how (macro-level) social phenomena emerge from (micro-level) interactions between people (Epstein, 1999).
One way to think about it might be in terms of mechanisms: “entities and activities organized in such a way that they are responsible for the phenomenon” (Illari & Williamson, 2011, p. 120). We could make this precise by modelling the mechanisms using causal Bayesian networks such that variables (nodes in a network) represent the probability of activities occurring, conditional on temporally earlier activities having occurred – basically, a chain of probabilistic if-thens.
Why do people get vaccinated for Covid-19? Here is the beginning of a (generative?) if-then theory:
If you learned about vaccines in school and believed what you learned and are exposed to an advert for a Covid-19 jab and are invited by text message to book an appointment for one, then (with a certain probability) you use your phone to book an appointment.
If you have booked an appointment, then (with a certain probability) you travel to the vaccine centre in time to attend the appointment.
If you attend the appointment, then (with a certain probability) you are asked to join a queue.
… and so on …
In a picture, each of these if-thens becomes an arrow between activities (diagram not reproduced here).
This does not explain how or why the various entities (people, phones, etc.) and activities (doing stuff like getting the bus as a result of beliefs and desires) are organised as they are, just the temporal order in which they are organised and dependencies between them. Maybe this suffices. “Explanations come to an end somewhere…”
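The chain of probabilistic if-thens can be sketched as a simulation. The probabilities below are invented for illustration; they are not from Mayne or any vaccination study.

```python
import random

# Illustrative probabilities for each if-then link (invented numbers)
P_BOOK = 0.6     # book an appointment, given advert exposure and invitation
P_TRAVEL = 0.9   # travel to the centre in time, given a booking
P_QUEUE = 0.95   # asked to join the queue, given attendance

def vaccination_chain(rng):
    """Walk the chain: each activity can only occur if the previous one did."""
    booked = rng.random() < P_BOOK
    attended = booked and rng.random() < P_TRAVEL
    queued = attended and rng.random() < P_QUEUE
    return queued

rng = random.Random(0)
runs = 100_000
rate = sum(vaccination_chain(rng) for _ in range(runs)) / runs
print(round(rate, 2))  # close to 0.6 * 0.9 * 0.95 ≈ 0.51
```

Each conditional probability depends only on the temporally earlier activity, which is the “chain of probabilistic if-thens” idea in miniature.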
What are counterfactual approaches?
Counterfactual impact evaluation usually refers to quantitative approaches to estimate average differences as understood in a potential outcomes framework (or generalisations thereof). The key counterfactual is something like:
“If the beneficiaries had not taken part in programme activities, then they would not have had the outcomes they realised.”
Logicians have long worried about how to determine the truth of counterfactuals, “if A had been true, B.” One approach, due to Stalnaker (1968), proposes that you:
Start with a model representing your beliefs about the factual situation where A is false. This model must have enough structure so that tweaking it could lead to different conclusions (causal Bayesian networks have been proposed; Pearl, 2013).
Add A to your belief model.
Modify the belief model in a minimal way to remove contradictions introduced by adding A.
Determine the truth of B in that revised belief model.
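A minimal sketch of those four steps, using a toy belief model and a single assumed causal rule (both invented for illustration, not Stalnaker’s formalism):

```python
# Step 1: a factual belief model, with enough structure to tweak –
# here, one causal rule linking two propositions.
beliefs = {"took_part_in_programme": False, "outcome_improved": False}

def causal_rule(model):
    """Assumed structure: participation produces the improved outcome."""
    model["outcome_improved"] = model["took_part_in_programme"]

def counterfactual(beliefs, antecedent, consequent):
    revised = dict(beliefs)     # Step 2: add A to a copy of the model
    revised[antecedent] = True
    causal_rule(revised)        # Step 3: minimally restore consistency
    return revised[consequent]  # Step 4: read off the truth of B

# "If they had taken part, the outcome would have improved":
print(counterfactual(beliefs, "took_part_in_programme", "outcome_improved"))  # True
```

The factual beliefs are left untouched; only the revised copy is consulted, which mirrors the “minimal modification” idea.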
This broader conception of counterfactual seems compatible with any kind of evaluation, contribution analysis included. White (2010, p. 157) offered a helpful intervention, using the example of a pre-post design where the same outcome measure is used before and after an intervention:
“… having no comparison group is not the same as having no counterfactual. There is a very simple counterfactual: what would [the outcomes] have been in the absence of the intervention? The counterfactual is that it would have remained […] the same as before the intervention.”
The counterfactual is untested and could be false – regression to the mean would scupper it in many cases. But it can be stated and used in an evaluation. I think Stalnaker’s approach is a handy mental trick for thinking through the implications of evidence and producing alternative explanations.
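Regression to the mean is easy to demonstrate with a simulation in which the intervention does nothing at all (all numbers invented):

```python
import random

rng = random.Random(1)

# Stable true scores plus independent measurement noise at pre and post;
# there is no intervention effect anywhere in this simulation.
true_scores = [rng.gauss(50, 10) for _ in range(10_000)]
pre = [t + rng.gauss(0, 10) for t in true_scores]
post = [t + rng.gauss(0, 10) for t in true_scores]

# Programmes often recruit people with the worst baseline scores.
recruited = [i for i, score in enumerate(pre) if score < 35]
mean_pre = sum(pre[i] for i in recruited) / len(recruited)
mean_post = sum(post[i] for i in recruited) / len(recruited)

# The recruited group "improves" despite no intervention at all,
# so the pre-post counterfactual is scuppered.
print(mean_pre < mean_post)  # True
```

Because the group was selected for extreme baseline scores, their unusually bad measurement noise at pre is not repeated at post, so scores drift back towards the mean on their own.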
Cook (2000) offers seven reasons why Theory-Based Evaluation cannot “provide the valid conclusions about a program’s causal effects that have been promised.” I think from those seven, two are key: (i) it is usually too difficult to produce a theory of change that is comprehensive enough for the task and (ii) the counterfactual remains theoretical – in the arm-chair, untested sense of theoretical – so it is too difficult to judge what would have happened in the absence of the programme being evaluated. Instead, Cook proposes including more theory in comparison group evaluations.
Bayesian contribution tracing
Contribution analysis has been supplemented with a Bayesian variant of process tracing (Befani & Mayne, 2014; Befani & Stedman-Bryce, 2017; see also Fairfield & Charman, 2017, for a clear introduction to Bayesian process tracing more generally).
The idea is that you produce (often subjective) probabilities of observing particular (usually qualitative) evidence under your hypothesised causal mechanism and under one or more alternative hypotheses. These probabilities and prior probabilities for your competing hypotheses can then be plugged into Bayes’ rule when evidence is observed.
Suppose you have two competing hypotheses: a particular programme led to change versus pre-existing systems. You may begin by assigning them equal probability, 0.5 and 0.5. If relevant evidence is observed, then Bayes’ rule will shift the probabilities so that one becomes more probable than the other.
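A minimal sketch of that update for a single piece of evidence, with invented likelihoods:

```python
def bayes_update(priors, likelihoods):
    """Posterior P(H | E) for competing hypotheses H given evidence E.
    priors: {H: P(H)}; likelihoods: {H: P(E | H)}."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

priors = {"programme": 0.5, "pre-existing systems": 0.5}
# Invented likelihoods: the evidence is much more probable under
# the programme hypothesis than under the alternative.
likelihoods = {"programme": 0.8, "pre-existing systems": 0.2}

posteriors = bayes_update(priors, likelihoods)
print(posteriors["programme"])  # ≈ 0.8
```

Further evidence is incorporated by feeding the posterior back in as the next prior.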
Process tracers often cite Van Evera’s (1997) tests, such as the hoop test and the smoking gun. I find the definitions of these challenging to remember, so one thing I like about the Bayesian approach is that you can think instead of the specificity and sensitivity of evidence, by analogy with (e.g., medical) diagnostic tests. A good test of a causal mechanism is sensitive, in the sense that there is a high probability of observing the relevant evidence if your causal theory is accurate. A good test is also specific, meaning that the evidence is unlikely to be observed if any alternative theory is true. See below for a table (lightly edited from Befani & Mayne, 2014, p. 24) showing the conditional probabilities of evidence for Van Evera’s tests, given a hypothesis and an alternative explanation.
Van Evera test (if Eᵢ is observed)    P(Eᵢ | Hyp)    P(Eᵢ | Alt)
Fails hoop test                       Low            Low or high
Passes smoking gun                    Low or high    Low
Passes straw-in-the-wind              High           High
Passes double-decisive                High           Low
Let’s take the hoop test. This applies to evidence which is unlikely if your preferred hypothesis were true. So if you observe that evidence, the hoop test fails. The test is agnostic about the probability under the alternative hypothesis. Straw-in-the-wind is hopeless for distinguishing between your two hypotheses, but could suggest that neither holds if the test fails. The double-decisive test has high sensitivity and high specificity, so provides strong evidence for your hypothesis if it passes.
The arithmetic is straightforward if you stick to discrete multinomial variables and use software for conditional independence networks. Eliciting the subjective probabilities for each source of evidence, conditional on each hypothesis, may be less straightforward.
I am with Cook (2000) and others who favour a broader conception of “theory-based” and suggest that better theories should be tested in quantitative comparison studies. However, it is clearly not always possible to find a comparison group – colleagues and I have had to make do without (e.g., Fugard et al., 2015). Using Theory-Based Evaluation in practice reminds me of jury service: a team are guided through thick folders of evidence, revisiting several key sections that are particularly relevant, and work hard to reach the best conclusion they can with what they know. There is no convenient effect size to consult, just a shared (to some extent) and informal idea of what intuitively feels more or less plausible (and lengthy discussion where there is disagreement). To my mind, when quantitative comparison approaches are not possible, Bayesian approaches are the most compelling way to synthesise qualitative evidence of causal impact and to make transparent how that synthesis was done.
Finally, it seems to me that the Theory-Based Evaluation category is poorly named. Better might be “Assumption-Based Counterfactual” approaches. Then RCTs and QEDs are Comparison-Group Counterfactual approaches. Both are types of theory-based evaluation and both use counterfactuals; it’s just that approaches using comparison groups gather quantitative evidence to test the counterfactual. However, the term doesn’t quite work since RCTs and QEDs rely on assumptions too… Further theorising needed.
“There’s something incredibly powerful – revolutionary, even – about challenging someone’s understanding of gender with your very existence.”
According to dominant ideas in “the West”, your gender ultimately reduces to whether you have XX or XY chromosomes, as inferred by inspecting your genitals at birth, and there are only two possibilities: woman or man. Yes, you will occasionally hear how sex is biological and gender is social, but under the dominant norms, (specifically chromosomal) sex and gender categories are defined to align.
The existence of transgender (trans) people challenges this chromosomal definition, since their gender differs from male/female sex category assigned at birth. People whose gender is under the non-binary umbrella challenge the man/woman binary since they are neither, both, or fluctuate between the two.
It is tempting for researchers to ignore these complexities since most people are cisgender (cis for short), that is, their gender aligns with their sex category at birth, and they are either a woman or a man. As the male/female demographic tickboxes illustrate, many do ignore the complexity.
A few years ago, analytic philosophers, having for centuries pondered questions such as “what can be known?” and “is reality real?”, discovered that theorising gender offered intellectual challenges too and could be used to support human rights activism. Although plenty of writers have pondered gender, this corner of philosophy offers clear definitions, so is perhaps easier to understand and critique than other approaches. I think it is also more compatible with applied social research.
One of the politically-aware analytical philosophers who caught my eye, Robin Dembroff, recently published a paper analysing what it means to be genderqueer. Let’s sketch out how the analysis goes.
“… the gendeRevolution has begun, and we’re going to win.”
Genderqueer originally referred to all gender outliers – whether cis, trans, or other. Its meaning has shifted to overlap with non-binary gender and trans identities as per the Venn flags below.
Both genderqueer and non-binary have become umbrella terms with similar meaning; however, genderqueer carries a more radical connotation – especially since it includes the reclaimed slur “queer” – whereas non-binary is more neutral and descriptive, even appearing in HR departments’ IT systems.
The data on how many people are genderqueer thus far is poor – hopefully the 2021 census in England and Wales will improve matters. In the meantime, a 2015 UK convenience sample survey of non-binary people (broadly defined) found that 63% identified as non-binary, 45% as genderqueer, and 65% considered themselves to be trans. The frequency of combinations was not reported.
This year’s international (and also convenience sample) survey of people who are neither men nor women “always, solely and completely” found a small age effect: people over 30 were eight percentage points more likely to identify as genderqueer than younger people.
Externalist versus internalist
Dembroff opens with a critique of two broad categories of theories of what gender is: externalist (or social position) theories and internalist (or psychological identity) theories.
Externalist theories define gender in terms of how someone is perceived by others and advantaged or disadvantaged as a result. So, someone would be genderqueer if they are perceived and treated as neither a man nor a woman. However, this doesn’t work for genderqueer people, Dembroff argues, since they tend to reject the idea that particular gender expressions are necessary to be genderqueer; “we don’t owe you androgyny” is a well-known slogan. Also, many cis people do not present neatly as male or female – that does not mean they are genderqueer.
One of the internalist accounts Dembroff considers, by Katherine Jenkins, defines gender in terms of what gender norms someone feels are relevant to them – e.g., how they should dress, behave, what toilets they may use – regardless of whether they actually comply with (or actively resist) those norms. Norm relevancy requires that genderqueer people feel that neither male nor female norms are relevant. This is easiest to see with binary gendered toilets – neither the trouser nor skirt-logoed room is safe for a genderqueer person. However, it is unlikely that none of the norms would be felt as relevant. So the norm-relevancy account, Dembroff argues, would exclude many genderqueer people too.
Critical gender kinds
Dembroff’s proposed solution combines social and psychological understandings of gender. They introduce the idea of a critical gender kind and offer genderqueer as an example. A kind, in this sense, is roughly a collection of phenomena defined by one or more properties. (For a longer answer, try this on social kinds by Ásta.) Not to be confused with gender-critical feminism.
A gender is a critical gender kind, relative to a given society, if and only if people who are that gender “collectively destabilize one or more core elements of the dominant gender ideology in that society”. The genderqueer kind destabilises the binary assumption that there are only two genders. Dembroff emphasises the collective nature of genderqueer; as a kind it is not reducible to any individual’s characteristics and not every genderqueer person need successfully destabilise the binary norm. An uncritical gender kind is then one which perpetuates dominant norms such as the chromosomal and genital idea of gender outlined above.
Another key ingredient is the distinction between principled and existential destabilising – roughly, whether you are personally oppressed in a society with particular enforced norms. Someone who is happy to support and use all-gender toilets through (principled) solidarity with genderqueer people has a different experience to someone who is genderqueer and feels unsafe in a binary gendered toilet.
In summary, genderqueer people collectively and existentially destabilise the binary norm. Some of the many ways they do this include: using they/them or neopronouns, through gender expression that challenges dominant norms, asserting that they are genderqueer, challenging gender roles in sexual relationships, and switching between male and female coded spaces.
Although Dembroff challenges Jenkins’ norm-relevancy account, to me the general idea of tuning into gender norms is helpful for decoding your gender, and it neatly complements Dembroff’s account. One trick might be to add, and view as irrelevant, norms like “your genitals determine your gender”, rather than only male and female norms. Revising the account to use probabilities, rather than binary true/false classical logic, also seems helpful. The externalist accounts remain relevant too, since they map out some of the ways that genderqueer people resist binary norms and the dominant ways that (especially cis) people perceive and treat others.
“What if we took a more daring, modernist, defamiliarizing approach to writing theory? What if we asked of theory as a genre that it be as interesting, as strange, as poetically or narratively rich as we ask our other kinds of literature to be? What if we treated it not as high theory, with pretentions to legislate or interpret other genres, but as low theory, as something vulgar, common, even a bit rude—having no greater or lesser claim to speak of the world than any other? It might be more fun to read. It might tell us something strange about the world. It might, just might, enable us to act in the world otherwise. A world in which the old faith in History is no more, but where there are histories that still might be made—in a pinch.”
I had tried to avoid engaging in grand metaphysical “ism” talk, but it seems that resistance is futile! So here are brief thoughts, in the context of theorising gender.
We can safely assume that there is a reality to people’s gender-relevant experiences and biochemistry which exists independently of our understandings. Taking this (to me obvious) stance is known as ontological realism. Theorising, about gender or otherwise, is done by people who have imperfect and indirect access to reality, and theories evolve over time. Our vantage point—beliefs, biases, values, experience, privilege and oppression—has an impact on our theories, so two gender theorists doing the best they can with the available evidence can produce very different explanations (epistemic relativism). This is true of any science where multiple theories are consistent with evidence; in other words, the theories are underdetermined by evidence. It is also true when we theorise about ourselves and try to work out our own gender.
Even with this relativist mess, manifesting as bickering in scientific journals and conferences, consensus can arise and one theory can be declared better than another (judgemental rationality). However, there are often many different ways to classify biological, social, and other phenomena, even with impossibly perfect access to reality (this has a great name: promiscuous realism).
The underdetermination of theories means that something beyond evidence is needed to decide how and what to theorise. Scholars in the critical theory tradition are required to pick a side in a social movement, for instance feminism, anti-racism, trans rights, or an intersectional composition thereof. It is not enough for a critical theory to be empirically adequate; it also has to help chosen social struggles make progress towards achieving their aims. Two theories may be empirically indistinguishable but one transphobic; from a trans rights perspective, the transphobic theory should be discarded.
(For more on epistemic relativity, ontological realism, and judgemental rationality, see Archer et al. (2016).)
Now we can make sense of what it means to be assigned female or male at birth. What is assigned is a sex category. This is not arbitrary, but based on socially agreed and – for cisgender people – reliable biological criteria. However, those criteria could have been otherwise, for instance using a broader range of biological features and more than two categories. Also the supposedly biological male/female sex category quickly takes on a social role that is independent of genitals and operates even when they are hidden.
A court case (GM v Carmarthenshire County Council EWFC 36) has ruled that a social worker’s “generalised statements, or tropes” based on attachment theory are not admissible evidence.
The full judgement by Mr Justice Mostyn has interesting thoughts on the valid application of theory and balance between theory and observation.
“… the local authority’s evidence in opposition to the mother’s application was contained in an extremely long, 44-page, witness statement made by the social worker […]. This witness statement was very long on rhetoric and generalised criticism but very short indeed on any concrete examples of where and how the mother’s parenting had been deficient. Indeed, it was very hard to pin down within the swathes of text what exactly was being said against the mother. […] [The social worker] was asked to identify her best example of the mother failing to meet L’s emotional needs. Her response was that until prompted by the local authority mother had not spent sufficient one-to-one time with L and had failed on one occasion to take him out for an ice cream. […] A further criticism in this vein was that the mother had failed to arrange for L’s hair to be cut in the way that he liked.”
There is also a detailed section on attachment theory:
“… the theory is only a theory. It might be regarded as a statement of the obvious, namely that primate infants develop attachments to familiar caregivers as a result of evolutionary pressures, since attachment behaviour would facilitate the infant’s survival in the face of dangers such as predation or exposure to the elements. Certainly, this was the view of John Bowlby, the psychologist, psychiatrist, and psychoanalyst and originator of the theory in the 1960s. It might be thought to be obvious that the better the quality of the care given by the primary caregiver the better the chance of the recipient of that care forming stable relationships later in life. However, it must also be recognised that some people who have received highly abusive care in childhood have developed into completely well-adjusted adults. Further, the central premise of the theory – that quality attachments depend on quality care from a primary caregiver – begins to fall down when you consider that plenty of children are brought up collectively (whether in a boarding school, a kibbutz or a village in Africa) and yet develop into perfectly normal and well-adjusted adults.”