A council in England was recently reprimanded for running an advertising campaign against begging. In a series of posters displayed throughout Nottingham, the city council claimed that “beggars aren’t what they seem”, that begging “funds the misuse of drugs” and that money given to beggars would go “down the drain” or “up in smoke”.
The UK Advertising Standards Authority (ASA) upheld complaints about Nottingham City Council’s campaign, saying that it reinforced negative stereotypes against vulnerable people, and portrayed all beggars as “disingenuous and undeserving” people who would use direct donations irresponsibly. The council was ordered not to display the ads in their current form again, and to avoid using potentially offensive material in the future.
But the council defended the campaign, arguing that the “hard-hitting” posters were necessary to “discourage members of the public from giving money to people who beg” on the basis that doing so would likely fund “life-threatening drug or alcohol addictions”. The posters encouraged people to donate money to local charities instead, using the hashtag #givesmart.
Similar appeals have been made by other charities and councils across the UK: the borough of Kensington and Chelsea attracted controversy over its own anti-begging messages, and Nottingham City Council was ordered to withdraw a similar campaign once before, in 2004, because it wasn’t backed up by evidence.
Although the council cited a blog post from a local charity in support of its claims, it’s clear that both the advertising watchdog and members of the public need to see more evidence that such campaigns prevent harm, rather than cause it.
So, how could local authorities avoid such a misstep in the future?
For one thing, if the aim is to prevent the harms of drug and alcohol addiction, the council could follow existing health recommendations. The National Institute for Health and Care Excellence (NICE) – the body providing advice on best practice for health and social care in England – makes a range of recommendations for helping people with alcohol addiction, for example. This includes following an evidence-based treatment manual and charting each person’s progress to review the effectiveness of different treatments. For homeless people, it recommends residential care for up to three months – it says nothing about trying to limit the amount of money that people receive.
But perhaps the council is keen to curb begging for other reasons. It may want to satisfy members of the public who find begging a nuisance – a deeply troubling rationale, if so. Or it may believe that cutting off money to people who beg could somehow treat drug and alcohol problems without causing anyone harm.
In any case, the council needs to be transparent about its aims and the evidence it has about the potential impacts of such campaigns so that an informed debate is possible.
Evaluating the evidence
There are many factors to take into account when evaluating the benefits and detriments of an ad campaign like this one. For instance, it would be useful to know how much money is given to people begging, how many of those people have alcohol or drug problems and how many seek out, or are given, support by local charities.
We would also need some hypotheses; for example, that the campaign will cause donations to local charities to rise, or drug and alcohol difficulties to fall among people who beg. These could be tested by tracking donations, or conducting surveys with people who beg both before and after the intervention, while taking account of any other factors that might have led to change.
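A simple way to test such a hypothesis is to compare the proportion of people giving (or experiencing harm) before and after the campaign. As a minimal sketch, the survey numbers below are invented purely for illustration – a real evaluation would also need to control for confounding factors, as noted above:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two survey proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical figures: 40 of 200 respondents reported giving directly
# before the campaign, 25 of 200 afterwards.
z = two_proportion_z(40, 200, 25, 200)
print(round(z, 2))  # 2.03 – above ~1.96, so unlikely to be chance alone
```

Even a significant before/after difference would not, by itself, show the campaign caused the change – which is why the confounding factors mentioned above matter.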
Of course, the outcomes of such research can vary greatly, depending on whose perspectives you include. For example, Camden and Islington councils once asked locals their views on diverted giving (donating to charity, rather than directly to people in need). While 36% were positive, only 2% of people who were actually begging thought it was a good idea.
Deciding who to include in studies is a perennial problem in social research, especially when evaluation reports present rich details of people’s lives. Nottingham City Council included three brief case summaries in its reply to the ASA’s judgment. Here’s one of them:
A man and a woman, who had previously been the subject of a Criminal Anti-Social Behaviour Order (CRASBO), were not homeless but travelled in to the city centre to beg for cash to fund their drug and alcohol addictions. The man would act as a look-out for his partner while she begged in shop doorways.
It is unclear what criteria the council used to choose their examples, but other research offers a different perspective on what it’s like to beg. One study, conducted in Scotland in the late 1990s, reported on a range of difficult decisions that people had to make, for instance choosing between begging and crime:
My bru [social security] money ran out and I had nae money. I have got a criminal record, so the choice was go back tae being a criminal and dae crookin’ and that or dae beggin’ and no get the jail. I am sick of the jail and that, so I decided tae dae beggin’.
They also reported what it felt like to beg – it seems plausible that people begging in Nottingham will have similar experiences:
They just look down on you like you’re dirt … like there was one time this guy says ‘you’re homeless, you’re dirt, you don’t have to be there, get a job’
Complex social and behavioural questions like these involve a complicated web of causes and effects. But mathematical tools such as causal networks may help: these can be designed and analysed using specialist software, which enables researchers to visualise the relationships between different factors in a diagram.
Each of the circles and arrows has a mathematical meaning: researchers can constrain the networks using data collected from studies, or try out invented scenarios to explore the consequences of different policies before any is implemented. All evaluations of complex interventions will make assumptions and have limitations; these diagrams can be used to make those assumptions explicit, and sound out where more research is needed.
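To make the idea concrete, a causal network is just a directed acyclic graph of factors. The toy network below is a hypothetical sketch – the factors and links are assumptions for illustration, not the council’s actual model, and a real analysis would constrain them with study data:

```python
# Hypothetical causal links, written as factor -> list of factors it affects.
edges = {
    "ad campaign": ["direct giving", "charity donations"],
    "direct giving": ["income from begging"],
    "charity donations": ["support services"],
    "income from begging": ["drug and alcohol harm"],
    "support services": ["drug and alcohol harm"],
    "drug and alcohol harm": [],
}

def ancestors(graph, node):
    """All factors with a directed path into `node`."""
    found = set()
    frontier = [n for n, outs in graph.items() if node in outs]
    while frontier:
        cur = frontier.pop()
        if cur in found:
            continue
        found.add(cur)
        frontier.extend(n for n, outs in graph.items() if cur in outs)
    return found

# Which factors could, on this model, influence drug and alcohol harm?
print(sorted(ancestors(edges, "drug and alcohol harm")))
```

Even this toy version makes the campaign’s implicit assumptions visible: the claimed benefit depends on the “charity donations” path outweighing any harm from cutting income on the “direct giving” path – exactly the kind of trade-off that data should be asked to settle.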
Of course, all this is just a brief sketch of the complexities involved. Given the information released so far, it is unclear how deeply the council considered the potential harms or benefits of this campaign. Perhaps using causal networks to explain how they thought it would work, and what adverse effects had been accounted for, would help to reassure the public. It’s vital that local authorities make use of research to understand the unintended impacts of policy – especially when it affects the most vulnerable people in society.