NHS England mental health clustering implementation “disappointing”

A document is circulating from NHS England and NHS Improvement (13 Aug 2018) on the current state of payment systems and clustering in mental health services in England.

It cites “local pricing rule 7” from the 2017/18 and 2018/19 National Tariff Payment System (NTPS) and reports on a survey of progress towards implementing the rule.

Here is what rule 7 said (p. 114):

Rule 7: Local prices for mental health services for working age adults and older people
a. Providers and commissioners must link prices for mental health services for working age adults and older people to locally agreed quality and outcome measures and the delivery of access and wait standards.
b. Providers and commissioners must adopt one of the following payment approaches in relation to mental health services for working age adults and older people:

i. episode of care based on care cluster currencies
ii. capitation, having regard to the care cluster currencies and any other relevant information, in accordance with the requirements of Rule 4(b) to (e)
iii. an alternative payment approach agreed in accordance with the requirements of Rule 4(b) to (e).

Commissioners and providers (233 in total) were asked, “What payment approach do you have in place with your contracts for working age adults and older people in 2017/18?”

Here are the results:

So only 14 out of 223 responses (6%) reported a move away from block contracts – the whole point of the new payment systems! The report notes, “The results were disappointing.”

Reasons given by respondents for the poor implementation included:

  • “limited local capacity to implement a new payment approach”
  • “lack of shared confidence in cost and activity data”
  • “uncertainty about how the proposed payment approaches would relate to the new operating models that would develop as part of integrated care systems.”

Services are supposed to be “clustering” the patients they see, irrespective of whether the clusters are used for payment. Rule 6 (p. 114):

Rule 6: Using the mental healthcare clusters
All providers of services covered by the care cluster currencies (see Annex B3) must record and submit the cluster data to NHS Digital as part of the Mental Health Services Dataset, whether or not they have used the care clusters as the basis of payment. This should be completed in line with the mental health clustering tool (Annex B3) and mental health clustering booklet to assign a care cluster classification to patients.

The research on clusters is damning. A recent study (Jacobs, et al., 2018) found that clusters were not very good at characterising the costs of different kinds of treatment and support (p. 7):

“Clusters are therefore not performing very well as a classification system to capture similarities and differences between patients. The categories of the current classification system appear to be neither case-mix nor resource homogeneous. We find evidence of large variation in terms of activity and costs within clusters and between providers.”

Surprisingly, the authors argue that clustering should continue (p. 7):

“… any payment approach needs to be underpinned by a solid classification system and to abandon the clustering approach now will thwart all progress. The clustering approach is already relatively well-established among most providers. Scrapping it all and starting from scratch risks putting mental health services back a decade in terms of developing a more transparent and fair funding system.”

Given the survey results above, it’s unclear how much progress would actually be thwarted by ditching clusters.


If you enjoy this sort of thing, you might also be interested in:

Mental health funding FOI responses update

I asked Treasury:

Blame for insufficient mental healthcare funding has been passed around between the Department of Health, NHS England, and individual Clinical Commissioning Groups (CCGs); however, the source of funding is the Treasury. Although CCGs and other mediating organisations make decisions about how much funding mental health receives, this is as a proportion of budgets decided at Treasury level. Any budgetary planning at Treasury level must therefore take mental health into consideration, alongside other areas of healthcare.

I am writing to request:

(i) names of individuals at Treasury and above, including advisors by official name or function, who are responsible for decisions made in relation to mental health care budgets;

(ii) documentation on budgetary decisions made, including evidence of how, in calculating the total health budget, mental health needs have been taken into consideration.

To (i) they said they don’t hold the information. To (ii) they said they do, but wouldn’t share it, citing Section 35 of the FOI Act.

(Full response here.)

I asked the Department of Health:

CCGs and other mediating organisations make decisions about how much funding mental health receives, but this is as a proportion of budgets decided at Treasury level. Any budgetary planning at Treasury level must therefore take mental health into consideration, alongside other areas of healthcare.

I am writing to inquire about advice provided by Department of Health to Treasury on mental health budgets.

1. Who in DH provides this advice?

2. What advice has been provided to inform the most recent budget allocation for health?

They also confirmed that they held relevant information but refused to share it, citing s35(1)(a).

(Full response here.)

I asked NHS England the same question:

[…] I am writing to inquire about advice provided by NHS England to Treasury on mental health budgets.

1. Who in NHSE provides this advice?

2. What advice has been provided to inform the most recent budget allocation for health?

They provided a response.

1. Who in NHSE provides this advice?

Paul Baumann, Chief Financial Officer for NHS England, has responsibility for the organisation’s budgets including providing advice on these budgets. NHS England is an Arm’s Length Body (ALB) of the Department of Health (DH), and much of the advice the Treasury would receive on Mental Health would be coordinated by the Department.

2. What advice has been provided to inform the most recent budget allocation for health?

NHS England’s view of the overall funding requirements of the NHS was set out in financial analysis conducted for the Call to Action (July 2013) [see, especially, the technical annex] and the Five Year Forward View (October 2014), which have been shared with DH and Her Majesty’s Treasury.

This analysis projects “do-nothing” expenditure using assumptions about the three main drivers associated with current health care demand and costs: demographic growth, non-demographic growth (e.g. technological development and medical advances) and health cost inflation. Historic trends for these drivers were reviewed and an estimation of future pressures developed for six service level ‘assumption sets’: Acute, Mental Health, Specialised Services, Primary Care, Prescribing and non-activity based costs. This high level analysis thus includes assumptions related to cost and demand growth for mental health services as part of the overall modelling.

Detailed analysis and costing is completed by NHS England on specific initiatives; the outputs of these models are used to inform budget announcements and the planning guidance information. These costings are developed by the Medical Directorate and Finance Directorate working together.

(Link to response here.)

An argument against payment-by-outcomes for mental health

I have just seen a report on Payment by Results (PbR) for the adult Improving Access to Psychological Therapies (IAPT) programme and have concerns about the approach. The conclusion of the summary is that “the system appears feasible and the currency appears to be fit for purpose” which seems to suggest that the approach is going ahead.

This IAPT PbR proposal is outcomes based: the more improvement shown by service users, as partly determined by patient-reported outcome measures (PROMs), the more money service providers would receive. This is a worry, as there is evidence that linking measures to targets tends to make the measures stop measuring what they are supposed to measure. For instance, targets on ambulance response times have led to statistically unlikely spikes exactly at the target, suggesting times have been changed [1]. A national phonics screen has a statistically unlikely spike just at the cutoff score, suggesting that teachers have rounded marks up where they fell just below [2]. The effect has been around for so long that it has a name, Goodhart’s law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes” [3]. Faced with funding cuts, how many NHS managers will be forced to “game” performance-based payment systems to ensure their service survives?
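
The ambulance example can be simulated in a few lines. This is a toy sketch with invented numbers, not real response-time data: honest times are drawn from a skewed distribution, and any near-miss within a minute of an assumed 8-minute target is quietly recorded as exactly on target.

```python
import random

# Toy simulation (illustrative only -- the numbers are invented, not real
# ambulance data): if staff under pressure record any response time that
# just misses an 8-minute target as exactly 8 minutes, the reported
# distribution develops a spike precisely at the target, of the kind the
# studies cited above detected.

random.seed(1)
TARGET = 8.0          # minutes: the (assumed) performance target
GAMING_WINDOW = 1.0   # near-misses within 1 minute get "adjusted"

true_times = [random.lognormvariate(2.0, 0.4) for _ in range(10_000)]

def recorded(t):
    """Return the time as it ends up in the performance data."""
    if TARGET < t <= TARGET + GAMING_WINDOW:
        return TARGET  # near-miss quietly rounded down to the target
    return t

reported = [recorded(t) for t in true_times]

spike = sum(1 for t in reported if t == TARGET)
honest = sum(1 for t in true_times if t == TARGET)
print(f"records at exactly {TARGET} min: true data {honest}, reported {spike}")
```

The same mechanism produces the spike-at-the-cutoff pattern in the phonics data: wherever near-misses can be nudged across a threshold, the recorded distribution piles up at the threshold.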

PROMs have been criticised by therapists for leading to an “administratively created reality” [5] and being clinically unhelpful, perhaps even damaging. However, evidence is building that feeding back results from PROMs to clinicians is helpful for improving care [4]. It would be very sad indeed if this useful tool were destroyed by payment systems, just as many mental health practitioners — and more importantly, service users — are seeing the benefits. Linking outcomes algorithmically to finances at all seems to be a bad idea in general — it’s especially bad when PROMs are just beginning to be trusted in routine practice.

References

[1] G. Bevan and C. Hood, “What’s measured is what matters: targets and gaming in the English public health care system,” Public Adm., vol. 84, no. 3, pp. 517–538, 2006.

[2] L. Townley and D. Gotts, “Topic Note: 2012 Phonics Screening Check Research report,” 2013.

[3] C. A. E. Goodhart, “Monetary relationships: A view from Threadneedle Street.” 1975.

[4] C. Knaup, M. Koesters, D. Schoefer, T. Becker, and B. Puschner, “Effect of feedback of treatment outcome in specialist mental healthcare: meta-analysis.,” Br. J. Psychiatry, vol. 195, no. 1, pp. 15–22, Jul. 2009.

[5] J. McLeod, “An administratively created reality: Some problems with the use of self-report questionnaire measures of adjustment in counselling/psychotherapy outcome research,” Couns. Psychother. Res., vol. 1, no. 3, pp. 215–226, Dec. 2001.

What I think’s wrong with adult mental health Payment by Results (PbR)

(Usual disclaimer: these are my personal views, etc.)

Here’s a simple guide to PbR for some background.

In adult mental health in England there is a collection of “clusters” characterizing mental health service users who (it is hoped) have similar levels of need. These will eventually be linked to tariffs – price (which hopefully relates to cost) – and used by CCGs when they commission services. Key to the approach is a questionnaire which asks clinicians to rate problems (e.g., “Problems associated with hallucinations and delusions”) and their severity, and an algorithm mapping these to the clusters. Some more detail is available over there.
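
To make the mechanics concrete, here is a deliberately simplified sketch of the general idea: a lookup from a clinician-rated score profile to the nearest cluster “prototype”. The item names, prototype profiles, and distance rule below are all invented for illustration; the real clustering tool and assignment algorithm are more involved.

```python
import math

# Hypothetical sketch, NOT the actual clustering algorithm: a clinician
# rates a handful of problems 0-4, and the score profile is mapped to
# whichever cluster "prototype" profile it sits closest to. Item order:
# hallucinations/delusions, low mood, risk, cognition (all invented).

PROTOTYPES = {
    "cluster_severe_depression": [0, 4, 3, 1],
    "cluster_ongoing_psychosis": [3, 1, 1, 2],
    "cluster_cognitive_impairment": [0, 1, 1, 4],
}

def assign_cluster(ratings):
    """Pick the prototype profile nearest (Euclidean) to the ratings."""
    return min(
        PROTOTYPES,
        key=lambda name: math.dist(ratings, PROTOTYPES[name]),
    )

print(assign_cluster([0, 3, 3, 0]))  # a profile dominated by low mood and risk
```

A real system also has to cope with the points raised below: ties, profiles far from every prototype, and information (such as diagnosis) that is not in the score profile at all.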

I’m not convinced by the approach. Here’s why:

  1. The model used to link score profiles to clusters has a large number of predictors (1,204), which makes it likely that it is “overfitting”, i.e., any predictions made are unlikely to generalise beyond the sample on which the model was developed. At its worst there are around 1.5 cases per predictor.
  2. There is evidence that clinicians disagree with the cluster predictions. Investigations around this have seemingly ignored the fact that there is additional information in cluster descriptions such as an ICD-10 clinical diagnosis, for instance “Likely to include F60 Personality disorder”. This information is not part of the scores used as an input to the algorithm which assigns clusters. Without understanding how clinicians use this information it is not possible to improve the approach.
  3. The methodology used to validate the model is circular. Clinicians were trained in an algorithm to choose a cluster on the basis of clinician-completed questionnaire scores. They followed this process with service users in routine practice, first completing a questionnaire, then recording the cluster chosen. A statistical approach was used to model the relationship between scores and clusters chosen. The end result is a statistical model predicting what the clinicians were initially trained to do. The validation method used is to look at correlations between what clinicians (trained in an algorithm) do and what a computer (which relearned the algorithm) does. This is circular.
  4. The clusters are supposed to characterise patients with similar needs, e.g., in terms of duration and complexity of interventions. Is there any evidence that they do? Seemingly not but I hope I’m wrong. It’s clearly essential that clusters actually do this, since PbR is supposed to be used for commissioning services and deciding how much services get paid. It is crucial to look at variation in costs as well as averages.
  5. The questionnaire (“tool”) used for deciding how much services get paid has also been proposed as an outcomes measure. This is despite the fact that the proposed approach derived by a factor analysis is psychometrically poor. The proposers recognise this (page 30): “it has been well established within the literature that the HoNOS is not typically associated with a high level of internal consistency due to its original intended purpose of being a scale with independent items encompassing a variety of health related problems.” Their factor analysis confirms this. Goodhart’s law suggests that even if the psychometrics were fine, the measure would cease to measure what it claims to measure once linked to costs.
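
Point 1 is easy to see in miniature. The simulation below is scaled down and uses pure noise rather than real data, but keeps roughly 1.5 cases per predictor, as in the cluster model: least squares then “explains” around two-thirds of the variance of an outcome that is, by construction, random, yet predicts worse than the mean in fresh data.

```python
import numpy as np

# Scaled-down illustration of point 1 (simulated noise, not the real
# model): with only ~1.5 cases per predictor, ordinary least squares can
# apparently explain much of a purely random outcome in-sample, while the
# fitted coefficients predict nothing in new data.

rng = np.random.default_rng(0)
n_predictors = 200
n_cases = 300  # ~1.5 cases per predictor

X_train = rng.normal(size=(n_cases, n_predictors))
y_train = rng.normal(size=n_cases)  # outcome is pure noise
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

X_test = rng.normal(size=(n_cases, n_predictors))
y_test = rng.normal(size=n_cases)

print("in-sample R^2: ", round(r_squared(y_train, X_train @ beta), 2))
print("out-of-sample R^2:", round(r_squared(y_test, X_test @ beta), 2))
```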

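The circularity in point 3 can also be simulated. In this invented sketch (a simple threshold rule, not the real clustering tool), clinicians are trained to apply a rule, a model is then fitted to their decisions, and “validation” finds high agreement, which only shows that the model relearned the rule the clinicians were taught, not that the clusters capture need.

```python
import random

# Toy illustration of the circularity in point 3 (invented rule, not the
# real clustering tool): fit a model to decisions made by clinicians who
# were trained to follow a rule, and the model inevitably recovers that
# rule, so agreement with clinicians is high by construction.

random.seed(0)

def trained_rule(score):
    """The rule clinicians were trained to follow."""
    return "high-need" if score >= 10 else "low-need"

# Clinicians apply the rule in routine practice, departing from it ~5% of
# the time.
scores = [random.randint(0, 20) for _ in range(2_000)]
clinician_choice = [
    trained_rule(s) if random.random() > 0.05
    else ("low-need" if trained_rule(s) == "high-need" else "high-need")
    for s in scores
]

# "Model development": learn the best-fitting threshold from the
# clinicians' recorded choices.
def accuracy(threshold):
    return sum(
        ("high-need" if s >= threshold else "low-need") == c
        for s, c in zip(scores, clinician_choice)
    ) / len(scores)

best_threshold = max(range(21), key=accuracy)
print("relearned threshold:", best_threshold)
print("agreement with clinicians:", round(accuracy(best_threshold), 2))
```

High agreement here says nothing about whether “high-need” tracks actual need, duration, or cost of care: the model has simply been validated against its own training instructions.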
I don’t think it has to be like this but it all makes very depressing reading.