
Public Health

50 Years of the Misuse of Drugs Act (1971)


On 27 May, it is exactly fifty years since the Misuse of Drugs Act 1971 (MDA), the UK’s primary legislation for controlling drugs, received Royal Assent.

The Act arranged drugs into a three-tier classification system – A, B and C – with controls based on the perceived relative harm of different substances. Now the legislation is at the centre of a campaign by Transform Drug Policy, which is calling for an overhaul of a law the organisation considers to represent ‘50 years of failure’.

One of the rationales behind the MDA was to consolidate the existing patchwork of legislation that had developed in the UK since the Pharmacy Act of 1868. This was the first time Parliament recognised a risk to the public from ‘poisoning’ and the 1868 Act distinguished between substances that were ‘toxic’ (poisons) and substances that were both ‘toxic’ and ‘addictive’ (‘dangerous drugs’). 

Some of these so-called ‘drugs of addiction’ were later subject to further controls under the Dangerous Drugs Act 1920 (DDA) which introduced prescriptions and criminalised unauthorised possession of opium, morphine, heroin and cocaine. 

Whilst this did represent a continuation of wartime drug control efforts it was also the result of a racist media-led panic around Chinese opium dens, as well as being a response to international moves toward uniformity on drug regulation. 

The DDA was later clarified by the Departmental Committee on Morphine and Heroin Addiction in their 1926 ‘Rolleston Report’. This formed an interpretation of the Act that became known as the ‘British System’, framing ‘drug addiction’ as a medical issue rather than a moral failing. 

By the 1950s, drugs were becoming increasingly connected in public consciousness with youth subculture and – especially in the tabloid press – black communities and the London jazz scene, stoking further moral panic. 

By 1958, the British Medical Journal observed that the regulations around drugs and poisons were already ‘rather complicated’.[1] This picture was complicated yet further by the 1961 UN Single Convention on Narcotic Drugs, which laid out an international regime of drug control, ratified in the UK in 1964 by another Dangerous Drugs Act.

Another committee was also formed under the Chairmanship of Lord Brain, ultimately leading to (yet another) Dangerous Drugs Act in 1967 which held onto the principles of the ‘British System’ but introduced new stipulations, such as requiring doctors to apply for a licence from the Home Office for certain prescriptions. 

During the 1960s, drugs continued to be associated in the popular imagination with youth, with most attention by 1967 on the ‘Counterculture’ and ‘the hippies’, and in particular their use of cannabis and LSD. That same year, Keith Richards’s country retreat, Redlands, was raided by the drugs squad in a bust that was symbolic of a broader clash of ideologies.

The arrest and harsh sentencing of Jagger, Keith Richards and their friend Robert Fraser prompted William Rees-Mogg’s famous Times editorial ‘Who Breaks a Butterfly on a Wheel?’ on 1 July 1967. This became part of a wider public debate on drug use: on 16 July a ‘Legalise Pot’ rally took place in Hyde Park, followed on 24 July by a full-page advert (paid for by Paul McCartney) in The Times calling for cannabis law reform.

Imaginatively, the Government decided to convene another committee, this time under Baroness Wootton. Its report, published at the end of 1968, argued that whilst it did not think cannabis should be legalised, it should be made distinct in law from other illegal drugs. 

Finally in 1970, Home Secretary James Callaghan introduced a new Bill that was described during its passage through Parliament as an attempt to replace ‘…the present rigid and ramshackle collection of drug Acts by a single comprehensive measure’.[2] But the Bill was as ideological as it was pragmatic, and Callaghan himself had rejected the recommendations of Wootton.

The debates in both the Commons and the Lords indicate that not only did most Members of Parliament who spoke on the subject have little understanding of the complexities of drug use, but also that the theme of the ‘permissive society’ and its supposed excesses was central.

The Bill was approved in May 1971, given Royal Assent the same month and fully implemented after two more years. The Act also established the Advisory Council on the Misuse of Drugs (ACMD), tasked with keeping the drug situation in the UK under review. 

Successive governments have tended to accept the recommendations of the Council, but there have been clashes, most notably in 2009, when relations broke down entirely after Professor David Nutt, then Chair of the Council, was sacked by Home Secretary Alan Johnson for claiming – with substantial evidence – that MDMA and LSD were less dangerous than alcohol.

For all of this, what has actually been the impact of the MDA? Well, as Simon Jenkins recently pointed out in a blog for the Guardian, 27,000 children and teenagers are now involved in ‘county lines’ drug gangs. Jenkins had previously described the MDA as a law that has done ‘less good and more harm’ than any other law on the statute book.

It is difficult to argue with this. Far from stemming recreational drug use, consumption of illegal drugs only increased after the MDA, becoming endemic in cities during the 1980s as heroin emerged as a significant social issue. In 1979, the number of notified heroin users exceeded 1,000 for the first time.

Over the 1980s and 1990s, drugs like MDMA were also increasingly used to enhance users’ experiences, especially in rave contexts, yet the Government line remained the same. As drug and harm reduction expert Julian Buchanan argued in 2000, ‘two decades of prevention, prohibition and punishment have had little noticeable impact upon the growing use of illegal drugs’.[3]

The MDA also deterred drug users from seeking help for fear of legal repercussions and limited the opportunities of countless young people. Last year, Adam Holland noted in the Harm Reduction Journal that in the UK, drug-related deaths were at the highest level on record and that although enormous time and money have gone into combating the illicit drugs trade, the market has not stopped growing.[4]

Writing thirty years after the MDA, Buchanan argued that a ‘bold and radical rethink of UK drug policy’ was needed. Such a rethink never materialised. In 2019, the House of Commons Select Committee on Drug Policy concluded that ‘UK drugs policy is failing’. Now, after half a century, it might be time for real, radical change, and the anniversary presents a great opportunity for this conversation to gain momentum.

Hallam Roffey is a PhD Candidate in the Department of History at the University of Sheffield. His research looks at the idea of ‘acceptability’ in English culture between 1970 and 1990, examining changing attitudes around sexually explicit imagery, violent media, offensive speech and blasphemy. You can find Hallam on Twitter @HallamRoffey


[1] John Glaister and Edgar Rentoul, ‘The Control of the Sale of Poisons and Dangerous Drugs’, British Medical Journal 2 (1958), p. 1525.

[2] House of Lords debate (October 1969), Hansard volume 790, cols 189-90.

[3] Julian Buchanan and L. Young, ‘The War on Drugs—A War on Drug Users’, Drugs: Education, Prevention, Policy 7:4 (2000), pp. 409-22.

[4] Adam Holland, ‘An ethical analysis of UK drug policy as an example of a criminal justice approach to drugs: a commentary on the short film Putting UK Drug Policy into Focus’, Harm Reduction Journal 17:97 (2020).


‘Violent affections of the mind’: The Emotional Contours of Rabies


Living through the Covid-19 pandemic has drummed home the emotional dimensions of diseases. Grief, anger, sorrow, fear, and – sometimes – hope have been felt and expressed repeatedly over the last year, with discussions emerging on Covid-19’s impact on emotions and the effect of lockdown on mental health.

But emotions have long been attached to diseases. Rabies – sometimes called hydrophobia – is a prime example.[i] In nineteenth-century Britain, France, and the United States, rabies stoked anxieties. Before the gradual and contested acceptance of germ theory at the end of the nineteenth century, some doctors believed that rabies had emotional causes.

For much of the nineteenth century, the theory that rabies generated spontaneously jostled with the theory that it was spread through a poison or virus. The spontaneous generation theory stressed the communality of human and canine emotions. Rather than contagion through biting, emotional sensitivity made both species susceptible to the disease.

A sensitive person prone to emotional disturbances was considered particularly at risk from external influences that might cause rabies to appear. “Violent affections of the mind, operating suddenly and powerfully on the nervous system” could in rare cases lead to rabies or, at the very least, exacerbate the symptoms in nervous patients, according to Manchester physician Samuel Argent Bardsley (who was more commonly known for promoting quarantine as a way of containing the disease).

For one Lancashire man, John Lindsay, the difficulty of feeding his family drove him to anxiety and despair, exacerbated by a bout of overwork and a lack of food. Fatigued, suffering from headaches, and fearing liquids, Lindsay remembered being bitten by a supposed mad dog some twelve years previously. Amidst violent spasms, visions of the black dog “haunted his imagination with perpetual terrors” and made recovery seem “hopeless.” With reluctance, Bardsley concluded that this was a case of spontaneous rabies. Emotional distress and an overactive imagination had caused and aggravated the disease.

During the mid-nineteenth century prominent London doctors argued that rabies was closely linked to hysteria and had emotional and imaginative origins, much to the chagrin of veterinarian William Youatt, the leading opponent of theories of spontaneous generation.[ii] In the 1870s alienists (otherwise known as psychiatrists) then lent greater intellectual credibility to theories of rabies’ emotional aetiology. They stressed the powerful sway that emotions and the mind held over individuals, especially in the enervating conditions of modern life.

Physician and prominent British authority on mental disorders Daniel Hack Tuke argued that disturbing emotions and images could create hydrophobic symptoms in susceptible individuals. Referencing Bardsley, and drawing on French examples, he noted that “such cases illustrate the remarkable influence exerted upon the body by what is popularly understood as the Imagination.” The very act of being bitten by a dog and the “fearful anticipation of the disease” was enough to spark rabies, even if the dog was not rabid. Even rational and emotionally-hardy doctors had reported suffering from hydrophobic symptoms when recalling the appalling scenes of distress during the examination and treatment of hydrophobic patients.[iii]

Tuke suggested that in some cases excitement or other forms of mental, emotional, and sensory overstimulation could activate the virus years after a bite from a rabid dog. He drew on a striking case from the United States, as reported by the Daily Telegraph in 1872. A farmer’s daughter had been bitten by a farm dog when choosing chickens for slaughter. The wound healed and no signs of rabies appeared until her wedding day two months later. The “mental excitement” of this life-changing event brought on a dread of water. After the ceremony she experienced spasms and “died in her husband’s arms.”

Tuke reproduced the newspaper’s view, and more generalized gendered assumptions about female emotional delicacy, that such “nervous excitement” had a profound influence on the “gentler” sex. In this case, her nerves were considered to have been exacerbated by the anticipation of the impending wedding night, which was often framed as an emotionally fraught sexual encounter.[iv]

Dr William Lauder Lindsay of the Murray Royal Asylum in Perth, Scotland, was another prominent proponent of the view that rabies was a predominantly emotional disease. The disease, he argued, “is frequently, if not generally, the result of terror, ignorance, prejudice, or superstition, acting on a morbid imagination and a susceptible nervous temperament.” Under the sway of their overactive imagination, an individual could take on “canine proclivities,” such as barking and biting. In classist language, Lindsay argued that rabies showed the influence of mind over the body, especially in the “lower orders of the community.”[v]

The British alienists’ depiction of rabies as a predominantly emotional disorder made its way across the Atlantic. In the mid-1870s Dr William A. Hammond, President of the New York Neurological Society and leading American authority on mental disorders, stated that the evidence from Europe suggested that heightened emotions might cause rabies in humans. More generally, New York physicians and neurologists debated whether individuals had died from actual rabies or from fear of the disease, and discussed how fear might turn a bite from a healthy animal into death.[vi]

The alienists lent greater credibility to earlier theories that rabies anxieties could lead to imaginary or spurious rabies. Tuke asserted that fears of rabies could create an imaginary manifestation of the disease. “Hydrophobia-phobia” demonstrated clearly the “action of mind upon mind,” and was distinct from the “action of the mind upon the body” in those cases when emotional distress led to actual rabies.

Echoing Tuke, Lindsay identified women as a particular vector in triggering spurious rabies. He asserted that they spread rabies fears, as supposedly shown by an Irishwoman in Perth who had frightened her husband into believing he had rabies. For Lindsay, this was a classic case of spurious (or false) rabies, which required the rational and firm intervention of medical men, such as himself, to stamp out. But he felt himself fighting an unstoppable tide. For in America, as well as Britain, the press ignited fears and created spurious rabies in susceptible individuals.[vii]

Lindsay and Tuke believed that rabies could, in some cases, be transmitted by dogs to humans through biting and “morbid saliva.” But some doctors controversially argued that it was a purely emotional disease. Eminent Parisian doctor Édouard-François-Marie Bosquillon set the tone in 1802 when he confidently declared that rabies in humans was caused solely by terror. His observation that individuals were struck with hydrophobic symptoms, including “loss of reason” and “convulsive movements,” at the sight of a mad dog provided sufficient proof.

Horror-inducing tales of rabies, fed to children from a young age, created fertile conditions for the development of the disease, particularly in “credulous, timid and melancholic” people. Gaspard Girard, Robert White, William Dick, and J.-G.-A. Faugére-Dubourg developed this line of argument as the century progressed. And the theory had traction. In the 1890s, Philadelphian neurologist Charles K. Mills insisted that rabies was purely a disease of the nerves. Such theories were, however, contentious, and Tuke cautioned against those who asserted that rabies was solely an imaginary disease.[viii]

Nonetheless, these theories cemented rabies as an emotionally-fraught disease and reinforced the dangers of dog bites: even a bite from a healthy dog could trigger a lethal neurological reaction in the swelling ranks of anxious individuals.

Dr Chris Pearson is Senior Lecturer in Twentieth Century History at the University of Liverpool. His next book Dogopolis: How Dogs and Humans made Modern London, New York, and Paris is forthcoming (2021) with University of Chicago Press. He runs the Sniffing the Past blog and you can download a free Android and Apple smart phone app on the history of dogs in London, New York, and Paris. You can find Chris on Twitter @SniffThePastDog.


Cover image: ‘Twenty four maladies and their remedies’. Coloured line block by F. Laguillermie and Rainaud, ca. 1880. Courtesy of the Wellcome Collection, https://wellcomecollection.org/works/pysjar4f/images?id=mpqquvrh [accessed 25 March 2021].

[i] Contemporaries sometimes used “rabies” and “hydrophobia” interchangeably to refer to the disease in animals and dogs, but sometimes used “rabies” to refer to the disease in dogs and “hydrophobia” for humans. With the rise of germ theory at the end of the nineteenth century, “rabies” gradually replaced “hydrophobia.” For simplicity’s sake, I will use “rabies” to refer to the disease in humans and animals unless I quote directly from a historical source.

[ii] Samuel Argent Bardsley, Medical Reports of Cases and Experiments with Observations Chiefly Derived from Hospital Practice: To which are Added an Enquiry into the Origin of Canine Madness and Thoughts on a Plan for its Extirpation from the British Isles (London: R Bickerstaff, 1807), 238-50, 284, 290; “Hydrophobia”, The Sixpenny Magazine, February 1866; Neil Pemberton and Michael Worboys, Rabies in Britain: Dogs, Disease and Culture, 1830-2000 (Basingstoke: Palgrave Macmillan, 2013 [2007]), 61-3.

[iii] Daniel Hack Tuke, Illustrations of the Influence of the Mind Upon the Body in Health and Disease Designed to Elucidate the Action of the Imagination (Philadelphia: Henry C. Lea, 1873), 198-99, 207.

[iv] Tuke, Illustrations, 200-1; Daily Telegraph, 11 April 1872; Peter Cryle, “‘A Terrible Ordeal from Every Point of View’: (Not) Managing Female Sexuality on the Wedding Night,” Journal of the History of Sexuality 18, no. 1 (2009): 44-64.

[v] William Lauder Lindsay, Mind in the Lower Animals in Health and Disease, vol. 2 (London: Kegan Paul, 1879), 17; William Lauder Lindsay, “Madness in Animals,” Journal of Mental Science 17:78 (1871), 185; William Lauder Lindsay, “Spurious Hydrophobia in Man,” Journal of Mental Science 23: 104 (January 1878), 551-3; Pemberton and Worboys, Rabies, 96-7; Liz Gray, “Body, Mind and Madness: Pain in Animals in the Nineteenth-Century Comparative Psychology,” in Pain and Emotion in Modern History, ed. Rob Boddice (Basingstoke: Palgrave, 2014), 148-63.

[vi] “Hydrophobia: The Subject Discussed by Medical Men,” New York Times, 7 July 1874; Jessica Wang, Mad Dogs and Other New Yorkers: Rabies, Medicine, and Society in an American Metropolis, 1840-1920. (Baltimore: Johns Hopkins University Press, 2019), 150-1.

[vii] Tuke, Illustrations, 198-99; Lindsay, “Spurious Hydrophobia in Man,” 555-6, 558.

[viii] Lindsay, Mind in the Lower Animals, 176; Édouard-François-Marie Bosquillon, Mémoire sur les causes de l’hydrophobie, vulgairement connue sous le nom de rage, et sur les moyens d’anéantir cette maladie (Paris: Gabon, 1802), 2, 22, 26; Vincent di Marco, The Bearer of Crazed and Venomous Fangs: Popular Myths and Delusions Regarding the Bite of the Mad Dog (Bloomington: iUniverse, 2014), 141-47; Pemberton and Worboys, Rabies, 64; Tuke, Illustrations, 198-99; Wang, Mad Dogs, 151-2.


Cheltine: The Diabetic Food That Wasn’t


Today, the use of science in food and drink marketing is so commonplace that we take it as a given, but this was not the case in the nineteenth century. In Britain, the introduction of scientific discourse into advertising following the Great Exhibition of 1851 was novel, creating an artificial demand for products on the basis that trusting consumers thought they would improve their lives. However, in many cases, these products were little more than placebos.

At the time, diabetes was frequently mentioned in the popular press, with many articles warning healthy people that they might be ‘unconscious victims’ if they suffered from ‘thirst, depression, lassitude, irritability without apparent cause and diminution of both physical and mental faculties’. 

Most physicians stressed the importance of diet in diabetes management, yet none could agree on the best approach. For some, high-carbohydrate diets were seen as most effective for diabetic patients, while for others, diets high in fat and animal protein had more positive outcomes. Others still advocated radical diets that promoted starvation (a liquids-only diet) as an effective treatment.

It was this lack of consensus amongst medical professionals, coupled with a growing public interest in science, that led canny manufacturers to produce ‘diabetic foods’. From 1880 onwards, the market became saturated with diabetic water, bread, flour, biscuits, chocolate and even whiskey. On the whole, these foods were not especially adapted for diabetics; instead, they reflected a desire to tap into consumer health concerns for purely financial gain.

In Britain, the leading ‘diabetic’ food brand of the early 20th century was Cheltine, which was launched by the Worth’s Food Syndicate of Cheltenham in December 1899 after a competition to name a new range of diabetic foods. Cheltine was a brown granular powder that could be stirred into hot milk or water and, according to its product launch announcement, was ‘harmless, flesh-forming and palatable’. It came in an eye-catching red and yellow tin, which featured a mock coat of arms with images of trees to suggest that it was natural, pure and healthy.

Advertisement for Cheltine (1900), source: Smithsonian National Museum of American History, https://americanhistory.si.edu/collections/search/object/nmah_1756685 

Almost immediately after its launch, Cheltine was called out by The Lancet (30 June 1900) for its misleading packaging, which stated that the food ‘cannot turn into sugar’. Cheltine responded instantly by claiming that this was merely a clerical error.

Nonetheless, just one week later, the brand was challenged once again by The Lancet (6 July 1900) for claiming in advertisements that its starch was so modified that it could not be converted into diabetic sugar during digestion. Dr Dixon expressed concern that not only was this claim false, but that it also spread the dangerous belief that starch was suitable for diabetic patients.

Cheltine immediately tried to counter these accusations by building a case for its credibility. It exhibited its products at commercial fairs, promoted its baker Mr N.J. Bloodworth as a ‘scientific baker’ and used testimonials from supposedly satisfied customers in its advertisements. However, these testimonials, which claimed that the diabetics were ‘perfectly cured within months’, only used initials, making it very difficult to trace their identities if indeed they were real at all. 

Cheltine’s bold claims were also propagated by a 1904 Industrial Gloucestershire publication, which described Cheltine as a ‘pioneer’ that had helped diabetics gain a complete recovery thanks to its ‘special peculiarities both in nature, proportion of ingredients and manner of treatment’. 

Again, doctors struck back. The British Medical Journal (10 March 1906; 21 May 1910) described these assertions as ‘startling’ and warned that Cheltine should not be recommended for diabetics, as its chemical composition was unsafe. Moreover, the product was ‘unpleasant to the taste’ with its ‘rancid almond flavour’ and appearance of ‘pulverised rusk or toast’, which made it wholeheartedly ‘unworthy of consideration’. This verdict was far removed from Cheltine’s own description in its advertisements as a ‘perfect cooked food’ that was ‘thoroughly pleasant to the taste’.

Despite the constant concerns of doctors, Cheltine’s diabetic food remained popular throughout the first two decades of the 20th century and advertisements were featured regularly in the local and national press. However, this came to an abrupt halt in the 1920s following the discovery of insulin by Dr Frederick Banting and Dr Charles Best. 

Insulin helped diabetics to better manage their condition, allowing for more flexibility in diet and extending life expectancy. Now, with a bona fide medical solution, diabetics began to rely increasingly on insulin and medical advice rather than on patent foods like Cheltine.

But Cheltine was not prepared to give up so easily. It quickly turned its attention instead to anaemia, marketing what it called a ‘medicated food’ that would ‘regenerate the blood, revive drooping strength and diminish languor’. Unsurprisingly, there was no scientific evidence to support this claim and there was little to distinguish the ‘anaemic’ food from Cheltine’s previous ‘diabetic’ food. Nonetheless, in promptly rebranding itself, Cheltine was able to appeal to a new segment of consumers and retain its market share. 

Nowadays, more rigorous controls and legislation around food and medicine mean that consumers are not duped to the same extent as their Victorian and Edwardian counterparts. Yet, the market is still saturated with ‘science-based’ commercial products, such as collagen supplements, nootropic drinks and activated charcoal, that we still know very little about. 

Therefore, studying food marketing from an historical perspective reminds us of the importance of reflecting upon and keeping a critical distance from such products, and the hope and promises that come with them. They could well be the Cheltine of the future.

Dr Lauren Alex O’Hagan is a Research Associate in the Department of Sociological Studies at University of Sheffield. Her research interests and publications focus largely on class conflict, literacy practices and consumer culture in late Victorian and Edwardian Britain, using a multimodal ethnohistorical lens.

Cover Image: Insulin pen, source: Unsplash, https://unsplash.com/photos/J4HwEwZtIs8


‘Illegitimate’ Cultures: from the Music Hall to the Rave


At first glance, mid-Victorian entertainment culture and the current ‘illegal’ rave scene of Covid-Britain may appear wildly incomparable. But the early Victorian period, as illustrated by the cultural division between the ‘music hall’ and ‘legitimate theatre’, was pivotal in cementing the division between ‘illegitimate’ and ‘legitimate’ culture. Understanding the historical drivers behind these definitions of culture is crucial to disentangling contemporary ‘public health’ policy from the influence of ‘moral panic’. Distinguishing between the two can reveal the broader influence of dominant class anxieties about cultures which appear to challenge economic or social ‘norms’, of which early music hall and rave culture are both examples.

The summer of 2020 witnessed stark contradictions in public health messages and policies. Whilst an inevitable wave of ‘illegal’ outdoor rave gatherings was condemned and suppressed by police forces, the public were simultaneously being encouraged (and subsidised) to ‘eat out’ in restaurants, despite indoor spaces being widely deemed a greater danger for viral transmission. This speaks volumes about the push to maintain the ideology of ‘legitimate culture’, defined by its relationship to free-market economics (to which restaurant culture is wedded), as more important than the scientific realities of public health.

The first organised and uniformed police force emerged in 1829, playing a key role in shaping ‘legitimate’ modes of culture in the newly expanding towns and cities of the Victorian era. Arising from a middle-class fear of the expanding working classes, early policing was born out of a desire to impose discipline, outside the confines of the workplace, upon sites of ‘unregulated’ leisure time – on the street or in the ale house. In the context of the Chartist movement of the 1830s, which saw mass demonstrations calling for wider enfranchisement, a fear of the ‘unruly crowd’ and its potential to challenge state power remained present throughout the century.

The larger, more commercially minded ‘Music Hall’ venue emerged out of the smaller ale houses and singing saloons of the late 18th and early 19th century urban milieu. Often tied closely to the brewing industry, music halls were associated with drinking, smoking and less ‘respectable’ behaviour. Their perceived lack of legitimacy, compared to ‘legitimate’ theatres, where smoking and drinking were forbidden, was solidified by the 1843 Theatres Act. This Act stipulated that only venues holding a Theatre Licence, granted by the Lord Chamberlain, could legally perform plays or performances with a ‘strong narrative’. This distinction between the music hall and the theatre reflected the Victorian era’s increasing tendency towards centralising state control over censorship.

The Eat Out To Help Out scheme of the summer of 2020 encouraged and subsidised the public to gather in restaurants, despite indoor spaces being deemed dangerous for viral transmission. Source: https://unsplash.com/photos/8pc6VvR0gJs, Photographer: Nick Fewings

A large part of the anxiety surrounding the rave scene, associated as it is with large gatherings in rural locations, may stem from its physical dislocation from the regulation and surveillance of urban space, a legacy that can be traced back to Victorian policing. It has been argued that the government night-time economy policies of the 1990s explicitly took aim at rave culture, seeking to replace it with tighter social controls by driving it into commercial club spaces that could be regulated through licensing, rendering rave more visible and therefore subject to greater monitoring in the public sphere.

Furthermore, unlike the Victorian music hall and ‘legitimate’ theatre, rave culture possesses a stake in neither broader social nor economic capital, existing (largely) outside of the regulated entertainment industry. This helps to explain rave culture’s consistent suppression following its height during the late 1980s and early 1990s. Passed in response to the infamous rave at Castlemorton in 1992, the 1994 ‘Criminal Justice and Public Order Act’ gave sweeping powers to stop unlicensed gatherings of more than a hundred people, with an emphasis on suppressing events which played loud music with ‘repetitive beats’ – an extremely unsubtle reference to rave culture.

A telling quote from a raver involved in the scene of the time mentions the class politics at play in suppressing particular cultures, as well as the relationship between ‘legitimate’ culture and free-market economics: ‘If it had been a big event, [which] had been staged [and] had cost thousands of pounds it would have been all right [...] But because it was poor people, with no money, doing something they haven’t been granted permission for, suddenly it was the crime of the century.’

Unlike rave culture, Music Hall would eventually become more accepted through its increasing ‘commercialisation’ as a national entertainment industry during the later 19th century. Conscious attempts were made to prove Music Hall’s legitimacy through self-censorship, curating more ‘respectable’ content, and deploying surveillance to regulate crowd behaviour, as demonstrated by numerous statements on theatre bill posters proclaiming police would be ‘in attendance.’

Whether we understand or support the rave scene or not, ‘rave culture is culture.’ It is possible both to be critical of the public health practices of rave events (as indeed many within the scene have been) and to consider it a culture in all its complexity (for what is culture without its contradictions and problematic aspects?).

Taking leisure culture, including rave culture, seriously brings into question the role of the state, and how it has historically influenced and enforced cultural norms through both legislation and the use of police force. In both music hall and rave culture, state suspicion and regulation has stemmed from a mistrust of forms of mass leisure that have risen ‘from below’; rave culture’s continued suppression, however, is in part due to its explicit refusal to ‘commercialise’ and become ‘respectable’ in the way that music hall did. In light of a recent investigation into a raver in Bristol being mauled by a police dog, asking serious questions about whose culture is given ‘legitimacy’, and the public health implications of this in the physical realm, has never been more pertinent.

Izzy Hadlum is currently a History Masters student at the University of Sheffield.  Her research deals with entertainment culture in Mid-Victorian Sheffield, with a focus on the dynamic between respectability and class across Music Hall and Theatre.   

Cover Image: ‘Rave culture is culture’. Source: https://unsplash.com/photos/EHWtxXpiDD0, photographer: Dima Pechurin


The State and the Pandemic: Spain and the 1889-1890 Flu


COVID-19 has brought the so-called Spanish Flu of 1918 sharply into the collective consciousness, but it was not the first worldwide pandemic to be faced by the modern state.

In the winter of 1889, a new type of flu came to Europe. Although it had originated in China, it was dubbed ‘Russian Flu’ because, in November, newspapers – including those in Spain – reported that large numbers of people had fallen ill in St. Petersburg. It would take less than a month to reach Madrid.

With greatly improved transport links, it was unsurprisingly suspected that the number of people travelling was responsible for its rapid spread. However, recent research has emphasised ‘that the important predictor of the speed of the pandemic is not the absolute numbers of passengers travelling between cities but the “connectedness” of the network of cities’.[1] In other words, it only took a small number of people to spread the flu so quickly across an increasingly interconnected continent.

There had been flu outbreaks in 1836/7 and 1848 but these were little remembered and, in 1889, the Spanish authorities were disastrously slow to react. Despite the press tracking its seemingly inevitable arrival, no preparations had been made. In fact, the flu had probably been circulating undetected for weeks before the government acknowledged it on 17 December. The consequences of this inaction are difficult to establish but, in a recent study, Sara García Ferrero suggests that 65% of all 6,180 deaths in Madrid in the nine weeks that followed can be attributed to the flu.[2] In Barcelona, as many as 52,000 caught the disease.[3]

Understanding of virology was in its infancy and early reports focussed on whether it was in fact flu or, perhaps, dengue fever. Even making allowance for this, official messaging was confused and, initially, the threat was played down. The Provincial Health Board of Madrid met the same afternoon as the government’s acknowledgment to discuss their response; it was remarkably sanguine. La Iberia reported that they had confirmed the presence of ‘a disease, with epidemic characteristics, of the flu or a severe cold, in a very benign form.’ This is particularly surprising considering that, for weeks, the newspapers had been carrying reports of the large numbers taken seriously ill elsewhere. Even more worrying, though, was their contradictory assertion that the ‘disease is not spread by contagion.’[4]

This may have been a deliberate attempt by state functionaries to manage the public reaction to the outbreak and there is further evidence of this phenomenon elsewhere. In Reus, for example, the authorities ordered that church bells no longer be rung for the dead to avoid spreading fear among the population.[5] It was, however, a difficult balance to strike. The endorsement of ‘cures’, such as ¡Pum! (Bang!) – a punch of rum and bitter orange – may have done more harm than good.

Some of the more concrete measures taken were also strikingly modern. Primary schools were closed and the Christmas holiday was extended for older students. A 250-bed field hospital was constructed at the present-day School of Engineering, off the Paseo de la Castellana in Madrid. What is particularly notable about these actions is that they were the same as those that had been taken elsewhere. Then, as now, there appeared to be an international consensus about the contours of state intervention. Nevertheless, although such intervention may have slowed the spread, it failed to stop it completely.

The authorities did nothing to limit public gatherings, perhaps for fear of economic damage, but this inaction still came at a cost. On 22 December, La Correspondencia de España reported that as many as 600 soldiers of the Madrid garrison had fallen ill. Despite this, there were signs that a type of social distancing was happening intuitively. People decided to avoid public spaces; streets, shops and cafés were largely deserted, and theatres closed (though only because of high levels of sickness among the performers).[6]

The longer-term, chronic impoverishment of the Spanish state meant that its capacity for a more exhaustive response was limited. Even the field hospital had to rely at least in part on private donations.[7]

The effects of the pandemic itself also significantly disrupted the provision of public services. Predictably, doctors were particularly vulnerable to catching the flu, but there were also high sickness rates among state officials. Paradoxically, though, some of this disruption served to limit the spread of the virus. Sickness rates among transport workers, for example, disrupted tram and railway services, involuntarily restricting the movement of people.

While these restrictions and relative wealth helped shield the middle class, the poor were disproportionately affected: partly because of overcrowding and poor sanitation, but also because the state’s penetration was weakest in the most deprived areas. The measures the authorities introduced had little effect on the lives of the residents there. In a quandary with sad parallels today, many had little choice but to risk their health and continue to go out to work.

The flu of 1889-90 was nothing like as deadly as COVID-19, but there are remarkable similarities in the Spanish state’s response. Despite advances in understanding, most countries made similar early mistakes during the current pandemic to those Spain made then. In both cases, this can partly be explained by a lack of scientific knowledge about the threat, but most decisions are also political ones, with intended and unintended consequences.

Eventually the measures were lifted, but only late in January and only when the death rate had returned to normal. In 1890 the lessons had been learned; it remains to be seen whether they will be in 2020, and whether they will be remembered more enduringly this time.

Dan Royle is an historian of nineteenth-century Spain. His PhD at the University of Sheffield is on 1848.

Cover Image: Plaza Mayor (ca. 1890), Memoria de Madrid

[1] Alain-Jacques Valleron, ‘Transmissibility and geographic spread of the 1889 influenza pandemic’, in Proceedings of the National Academy of Science of the U.S.A. 107/19 (2010) pp.8778–8781.

[2] Sara García Ferrero, ‘La gripe de 1889-1890 en Madrid’, Ph.D. thesis (Universidad complutense de Madrid, 2017), p.452.

[3] Bogumiła Kempińska-Mirosławska and Agnieszka Woźniak-Kosek, ‘The influenza epidemic of 1889–90 in selected European cities – a picture based on the reports of two Poznań daily newspapers from the second half of the nineteenth century’, in Medical Science Monitor 19 (2013), pp.1131–1141.

[4] ‘Noticias’, in La Iberia (18 December 1889), p.2.

[5] Quoted in Ferrero, ‘La gripe de 1889-1890’, p.38.

[6] La Correspondencia de España (22 December 1889), p.3; Ferrero, ‘La gripe de 1889-1890’, p.43.

[7] ‘Boletín sanitario’, in El Día (28 December 1889), p.1.


COVID-19, ‘Big Government’, and the Prohibition of Alcohol: Crisis as a Transnational Moment for Social Change


Throughout history, crises have often led to enormous social and economic reform as policymakers are forced to come up with new ways to meet unexpected demands. As Walter Scheidel argues in his book, The Great Leveller (2017), mass violence has been the primary impetus for the decline of inequality throughout world history, most recently with the Second World War serving as a watershed in relation to increased government spending on social programmes in many of its participating states. Although a crisis of a very different nature, the current coronavirus pandemic has also brought about similar shifts, with governments running huge budget deficits to protect jobs and counteract the threat of a looming recession caused by travel restrictions and lockdowns.

We also witness cases where governments experiment with creative solutions to crises that stretch across borders, as is the case with the current global pandemic. For a variety of reasons, a small handful of countries have resorted to banning the sale of intoxicants. One of the most debated aspects of South Africa’s lockdown has been its prohibition on the sale of alcohol and cigarettes, intended to reduce hospital admissions and secure beds for COVID-19 patients. Admissions have dropped by two-thirds due to reductions in alcohol-related violence and accidents, but such draconian measures have also meant the rise of black-market trade and the near-collapse of the country’s proud wine industry.

The sale of alcohol was also banned in the Caribbean island of Sint Maarten, a constituent country of the Netherlands, and in Nuuk, the capital of Greenland, over its role in exacerbating incidents of domestic violence that came with the lockdown. In Thailand, the prohibition on alcohol was put in place to prevent the spread of the virus in social gatherings. In each setting, such policies were deemed drastic but necessary, carefully implemented for their advantages in tackling a variety of health concerns whilst also considering their clear downsides.

Although instituted under entirely different circumstances, the First World War was also a moment when similarly harsh controls were imposed on the sale of alcohol across the world. France and Russia were the first to institute bans on absinthe and vodka, respectively, due to concerns over their impact on wartime efficiency. Countries in which anti-alcohol temperance movements were already influential also implemented tough restrictions of varying degrees. Although the production and sale of alcohol had already been banned in different local jurisdictions in Canada and the United States, national prohibition came to fruition in both countries due to the war. Alcohol was not banned in Britain, but the country nevertheless instituted far-reaching controls on the distribution of drink under the Central Control Board (CCB), established in 1915 to enforce higher beverage duties and shorter closing hours in pubs.

In almost every instance, it was the context of the war that spurred the move towards instituting these tough restrictions. Temperance activists in North America had been pushing for a national prohibition for decades, but the conditions of the war, such as the rise of anti-German sentiment directed towards German-American breweries such as Anheuser-Busch, brought the federal implementation of prohibition to the forefront of national politics. In Britain, part of the CCB’s responsibility was the nationalisation of pubs and off-licenses situated in parts of the country that were of strategic importance to the war effort.

These contexts directly parallel what we’re seeing in South Africa and Thailand, where extraordinary circumstances necessitated extraordinary countermeasures. However, there is also an important difference that must be stressed: while current lockdown prohibitions are merely temporary, most advocates of prohibitions and controls a century ago believed that such measures were to be permanent, based on their view that there were no advantages to permitting the existence of ‘demon drink’ in society. The ban on the distillation of vodka instituted under Imperial Russia in 1914 was maintained after the October Revolution and was not scrapped until after Lenin, himself an ardent prohibitionist, died in 1924. Yet, within the British context, the First World War effectively reshaped alcohol licensing for several generations, as high beverage duties and shorter opening hours were mostly preserved into the interwar and postwar eras.

These cases highlight the broader implications of social and economic reforms that are being implemented today. Right-wing governments in both Britain and Japan have approved record levels of government spending in the form of economic aid and stimulus. As Bernie Sanders ended his bid for the Democratic nomination in April 2020, politicians of both the left and the right debated the federal implementation of universal healthcare and paid sick leave in light of the public health crisis. Most recently, the Spanish government announced a €3-billion-euro universal basic income scheme to stimulate the pandemic-hit economy through increased consumer spending. A columnist for The Washington Post was clearly onto something when he declared that ‘there are no libertarians in foxholes’.

It is, however, decidedly too early to predict the long-term impacts of COVID-19 and whether these will lead to what many hope to be a reversal of the neoliberal reforms that have dominated economics since the 1970s. One cannot forget that the ‘Keynesian Resurgence’ in stimulus spending during the Financial Crisis of 2007-08 was immediately followed by the tragedy of the Eurozone Crisis and the traumas of austerity measures that devastated the public sectors of Greece, Spain, Italy, Britain, and so on. Despite that, the impact of abrupt changes in undermining the status quo should not be underestimated, as we saw with the global ‘wave’ of alcohol prohibitions a century before. History, therefore, is an apt reminder of how crises are moments when ‘radical’ reforms that were previously only imagined can eventually become reality.

Ryosuke Yokoe is a historian of medicine, science, and public health, presently affiliated with the University of Sheffield as an honorary research fellow. He recently completed a PhD on the medical understandings of alcohol and liver disease in twentieth-century Britain. You can find him on Twitter @RyoYokoe1.

Cover image: Array of liquor bottles, courtesy of Angie Garrett, https://www.flickr.com/photos/smoorenburg/3312808594/ [accessed 28 May 2020].
