Blog

History repeating in psychedelics research

Interesting draft paper by Michiel van Elk and Eiko Fried on flaws in evaluations of psychedelics as treatments for mental health conditions, and how to do better. Neat 1966 quotation at the end:

‘… we urge caution repeating the history of so many hyped treatments in clinical psychology and psychiatry in the last century. For psychedelic research in particular, we are not the first ones to raise concerns and can only echo the warning expressed more than half a century ago:

“To be hopeful and optimistic about psychedelic drugs and their potential is one thing; to be messianic is another. Both the present and the future of psychedelic research already have been grievously injured by a messianism that is as unwarranted as it has proved undesirable”. (Masters & Houston, 1966)’

Technical recession

A country is in what’s now known as a “technical recession” if its real GDP falls for two quarters in a row. The term was introduced in a 1974 NYT article by the then US Commissioner of Labor Statistics, Julius Shiskin.

The Office for Budget Responsibility (2023, p. 39) says that the UK “narrowly avoided a technical recession in the second half of 2022 as real GDP fell by 0.2 per cent in the third quarter, but was flat in the fourth quarter.”
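The rule is mechanical enough to check in a few lines. A toy sketch in R, with invented quarterly growth figures (not real data):

```r
# Invented quarterly real GDP growth figures (% change on previous quarter)
growth <- c(0.3, -0.2, 0.0, 0.1, -0.1, -0.4)

falls <- growth < 0
# A quarter completes a technical recession if real GDP fell in it
# and in the quarter before
technical_recession <- c(FALSE, falls[-length(falls)] & falls[-1])
technical_recession
#> [1] FALSE FALSE FALSE FALSE FALSE  TRUE
```

Quarters 2–3 mirror the OBR case above: a fall followed by a flat quarter doesn’t count, because flat isn’t falling.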

Tin openers versus dials

Neil Carter (1989, p. 134) on the limits of data dashboards and mindless use of KPIs:

“… the majority of indicators are tin-openers rather than dials: by opening up a ‘can of worms’ they do not give answers but prompt interrogation and inquiry, and by themselves provide an incomplete and inaccurate picture.”

Carter, N. (1989). Performance indicators: ‘Backseat driving’ or ‘hands off’ control? Policy & Politics, 17(2), 131–138.

“Randomista mania”, by Thomas Aston

Thomas Aston provides a helpful summary of RCT critiques, particularly in international evaluations.

Waddington, Villar, and Valentine (2022), cited therein, provide a handy review of comparisons between RCT and quasi-experimental estimates of programme effect.

Aston also cites examples of unethical RCTs. A vivid one is an RCT in Nairobi with an arm that involved threatening to disconnect water and sanitation services if landlords didn’t settle their debts.

Hypothesis testing for categorical predictors

Interesting update to {ggeffects}, by Daniel Lüdecke:

A reason to compute adjusted predictions (or estimated marginal means) is to help understand the relationship between the predictors and the outcome of a regression model. Particularly for more complex models, for example those with interaction terms, it is often easier to understand the associations by looking at adjusted predictions instead of the raw table of regression coefficients.

A step that often follows is to see whether there are statistically significant differences: for example, differences between groups, i.e. between the levels of categorical predictors, or whether trends differ significantly from each other.

The ggeffects package provides a function, hypothesis_test(), which does exactly this: it tests differences of adjusted predictions for statistical significance. This is usually called contrasts or (pairwise) comparisons. The package vignette shows examples of how to use it.
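A minimal sketch of the workflow, assuming a {ggeffects} version that ships hypothesis_test() (the model and data here are arbitrary illustrations):

```r
library(ggeffects)

# A model with a categorical predictor
m <- lm(Sepal.Width ~ Species, data = iris)

# Adjusted predictions (estimated marginal means) for each species
ggpredict(m, "Species")

# Pairwise comparisons: are the differences between those
# predictions statistically significant?
hypothesis_test(m, "Species")
```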

Read more.

Formal education and training

“[…] formal education and training rarely enhances competence. Instead, the so-called educational system mainly performs sociological functions, like controlling access to protected occupations and legitimising huge disparities in quality of life. These, in turn, have the effect of compelling most people, against their better judgement, to participate in the unethical activities of which modern society is so largely composed – the manufacture and marketing of junk foods, junk toys, junk education and junk research.”

– John Raven (2003, p. 360)

References

Raven, J. (2003). CPD – What should we be developing? The Psychologist, 16(7), 360–362.

Five questions to ask of social research

  1. Why should I care about this sample? Is the sample itself of interest, whether 1 person (e.g., a biography-like case study) or 1,000?
  2. If generalisation to a broader population is intended or implied,
    (a) How is the case made that the findings in the sample transfer to other people?
    (b) Why should I care about the target population?
  3. To what extent do findings depend on participants being able to articulate the reasons why they acted the way they did?
  4. Do the researchers state or imply that X caused Y or contributed to Y? If so, what evidence is provided that if X hadn’t been the case, then Y would have been different?
  5. What political agendas do (a) the researchers and (b) their institutions have? Relatedly, what constraints are they under, e.g., due to who funds them?

Schopenhauer on religion

“According to this doctrine, then, God created out of nothing a weak race prone to sin, in order to give them over to endless torment. And, as a last characteristic, we are told that this God, who prescribes forbearance and forgiveness of every fault, exercises none himself, but does the exact opposite; for a punishment which comes at the end of all things, when the world is over and done with, cannot have for its object either to improve or deter, and is therefore pure vengeance. So that, on this view, the whole race is actually destined to eternal torture and damnation, and created expressly for this end, the only exception being those few persons who are rescued by election of grace, from what motive one does not know.

“Putting these aside, it looks as if the Blessed Lord had created the world for the benefit of the devil! It would have been so much better not to have made it at all.

Arthur Schopenhauer (1788–1860), The Christian System.

Myers-Briggs

People are rightly critical of the Myers–Briggs Type Indicator (MBTI). But some of its scales are moderately correlated with the Big Five dimensions, which are seen as more credible in differential psychology. MBTI extraversion correlates with… wait for it… Big Five extraversion (50% shared variance). MBTI intuition correlates with openness to experience (40% shared variance). The opposite poles correlate as you’d expect.

Here are the key correlations (Furnham et al., 2003, p. 580; gender and linear effects of age partialed out):

“Neuroticism was most highly correlated with MBTI Extraversion (r = -.30, p = .001) and Introversion (r = .31, p < .001). Costa and McCrae’s Extraversion was most highly correlated with Myers-Briggs Extraversion (r = .71, p < .001) and Introversion (r = -.72, p < .001). Openness was most highly correlated with Sensing (r = -.66, p < .001) and Intuition (r = .64, p < .001). Agreeableness was most highly correlated with Thinking (r = -.41, p < .001) and Feeling (r = .28, p < .001). Conscientiousness was most highly correlated with Judgment (r = .46, p < .001) and Perception (r = -.46, p < .001).”
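The shared-variance figures above are just these correlations squared, e.g.:

```r
# Shared variance = r^2, expressed as a percentage
r <- c(extraversion = 0.71, intuition = 0.64)
round(100 * r^2)
#> extraversion    intuition
#>           50           41
```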

Dichotomising is still silly, particularly for scores close to thresholds, where a light breeze might flip someone’s type from, say, I to E or vice versa. But the same can be said of any discretisation taken too seriously. Consider also clinical bands on mental health questionnaires and attachment styles on the Experiences in Close Relationships Scale.

Also silly are tautologous non-explanations of the form: they behave that way because they’re E. Someone is E because they ticked a bunch of boxes saying they consider themselves extraverted! The types are defined transparently in terms of thoughts, feelings, and behaviour. They help structure self-report, but don’t explain why people are the way they are. Explanations require mechanisms.

References

Furnham, A., Moutafi, J., & Crump, J. (2003). The relationship between the revised NEO-Personality Inventory and the Myers-Briggs Type Indicator. Social Behavior and Personality, 31, 577–584.