
Medical Humanities

50 Years of the Misuse of Drugs Act (1971)


On 27 May, it is exactly fifty years since the Misuse of Drugs Act 1971 (MDA), the UK’s primary legislation for controlling drugs, received Royal Assent.

The Act arranged drugs into a three-tier classification system – A, B and C – with controls based on the perceived relative harm of different substances. Now the legislation is at the centre of a campaign by Transform Drug Policy, which is calling for an overhaul of a law the organisation considers to represent ‘50 years of failure’.

One of the rationales behind the MDA was to consolidate the existing patchwork of legislation that had developed in the UK since the Pharmacy Act of 1868. That Act marked the first time Parliament recognised a risk to the public from ‘poisoning’, and it distinguished between substances that were ‘toxic’ (poisons) and substances that were both ‘toxic’ and ‘addictive’ (‘dangerous drugs’).

Some of these so-called ‘drugs of addiction’ were later subject to further controls under the Dangerous Drugs Act 1920 (DDA) which introduced prescriptions and criminalised unauthorised possession of opium, morphine, heroin and cocaine. 

Whilst this did represent a continuation of wartime drug control efforts it was also the result of a racist media-led panic around Chinese opium dens, as well as being a response to international moves toward uniformity on drug regulation. 

The DDA was later clarified by the Departmental Committee on Morphine and Heroin Addiction in their 1926 ‘Rolleston Report’. This formed an interpretation of the Act that became known as the ‘British System’, framing ‘drug addiction’ as a medical issue rather than a moral failing. 

By the 1950s, drugs were becoming increasingly connected in public consciousness with youth subculture and – especially in the tabloid press – black communities and the London jazz scene, stoking further moral panic. 

By 1958, the British Medical Journal observed that the regulations around drugs and poisons were already ‘rather complicated’.[1] This picture was complicated yet further by the 1961 UN Single Convention on Narcotic Drugs, which laid out an international regime of drug control and was ratified in the UK in 1964 through another Dangerous Drugs Act.

Another committee was also formed under the Chairmanship of Lord Brain, ultimately leading to (yet another) Dangerous Drugs Act in 1967 which held onto the principles of the ‘British System’ but introduced new stipulations, such as requiring doctors to apply for a licence from the Home Office for certain prescriptions. 

During the 1960s, drugs continued to be associated in the popular imagination with youth, with most attention by 1967 on the ‘Counterculture’ and ‘the hippies’, and in particular their use of cannabis and LSD. That same year, Keith Richards’ country retreat, Redlands, was raided by the drugs squad in a bust that came to symbolise a broader clash of ideologies.

The arrest and harsh sentencing of Jagger, Keith Richards and their friend Robert Fraser prompted William Rees-Mogg’s famous Times editorial ‘Who Breaks a Butterfly on a Wheel?’ on 1 July 1967. This became part of a wider public debate on drug use: on 16 July a ‘Legalise Pot’ rally took place in Hyde Park, followed on 24 July by a full-page advert (paid for by Paul McCartney) in The Times calling for cannabis law reform.

Imaginatively, the Government decided to convene another committee, this time under Baroness Wootton. Its report, published at the end of 1968, argued that whilst it did not think cannabis should be legalised, it should be made distinct in law from other illegal drugs. 

Finally in 1970, Home Secretary James Callaghan introduced a new Bill that was described during its passage through Parliament as an attempt to replace ‘…the present rigid and ramshackle collection of drug Acts by a single comprehensive measure’.[2] But the Bill was as ideological as it was pragmatic, and Callaghan himself had rejected the recommendations of Wootton.

The debates in both the Commons and the Lords indicate that not only did most Members of Parliament who spoke on the subject have little understanding of the complexities of drug use, but also that the theme of the ‘permissive society’ and its supposed excesses was central.

The Bill was approved in May 1971, given Royal Assent the same month and fully implemented after two more years. The Act also established the Advisory Council on the Misuse of Drugs (ACMD), tasked with keeping the drug situation in the UK under review. 

Successive governments have tended to accept the recommendations of the Council, but there have been clashes, most notably in 2009, when relations broke down entirely after Professor David Nutt, then Chair of the Council, was sacked by Home Secretary Alan Johnson for claiming – with substantial evidence – that MDMA and LSD were less dangerous than alcohol.

For all of this, what has actually been the impact of the MDA? Well, as Simon Jenkins recently pointed out in a blog for the Guardian, 27,000 children and teenagers are now involved in ‘county lines’ drug gangs. Jenkins had previously described the MDA as a law that has done ‘less good and more harm’ than any other law on the statute book.

It is difficult to argue with this. Far from stemming recreational drug use, the MDA was followed by a rise in the use of illegal drugs, which became endemic in cities during the 1980s as heroin became a significant social issue. In 1979, the number of notified heroin users exceeded 1,000 for the first time.

Over the 1980s and 1990s, drugs like MDMA were also increasingly used to enhance users’ experiences, especially in rave contexts, yet the Government line remained the same. As drug and harm reduction expert Julian Buchanan argued in 2000, ‘two decades of prevention, prohibition and punishment have had little noticeable impact upon the growing use of illegal drugs’.[3]

The MDA also deterred drug users from seeking help for fear of legal repercussions and limited the opportunities of countless young people. Last year, Adam Holland noted in the Harm Reduction Journal that drug-related deaths in the UK were at the highest level on record and that, although enormous time and money have gone into combating the illicit drugs trade, the market has not stopped growing.[4]

Writing thirty years after the MDA, Buchanan argued that a ‘bold and radical rethink of UK drug policy’ was needed. Such a rethink never materialised. In 2019, the House of Commons Select Committee on Drug Policy concluded that ‘UK drugs policy is failing’. Now, after half a century, it might be time for genuinely radical change, and the anniversary presents a great opportunity for this conversation to gain momentum.

Hallam Roffey is a PhD Candidate in the Department of History at the University of Sheffield. His research looks at the idea of ‘acceptability’ in English culture between 1970 and 1990, examining changing attitudes around sexually explicit imagery, violent media, offensive speech and blasphemy. You can find Hallam on Twitter @HallamRoffey


[1] John Glaister and Edgar Rentoul, ‘The Control of the Sale of Poisons and Dangerous Drugs’, British Medical Journal (1958;2), p. 1525.

[2] House of Lords debate (October 1969), Hansard volume 790, cols 189-90.

[3] Julian Buchanan and L. Young, ‘The War on Drugs—A War on Drug Users’, Drugs: Education, Prevention, Policy 7:4 (2000), pp. 409-22.

[4] Adam Holland, ‘An ethical analysis of UK drug policy as an example of a criminal justice approach to drugs: a commentary on the short film Putting UK Drug Policy into Focus’, Harm Reduction Journal 17:97 (2020).


‘Violent affections of the mind’: The Emotional Contours of Rabies


Living through the Covid-19 pandemic has drummed home the emotional dimensions of disease. Grief, anger, sorrow, fear, and – sometimes – hope have been felt and expressed repeatedly over the last year, with discussions emerging on Covid-19’s impact on emotions and the effect of lockdown on mental health.

But emotions have long stuck to diseases. Rabies – sometimes called hydrophobia – is a prime example.[i] In nineteenth-century Britain, France, and the United States, rabies stoked anxieties. Before the gradual and contested acceptance of germ theory at the end of the nineteenth century, some doctors believed that rabies had emotional causes.

For much of the nineteenth century, the theory that rabies generated spontaneously jostled with the theory that it was spread through a poison or virus. The spontaneous generation theory stressed the communality of human and canine emotions: rather than contagion through biting, it was emotional sensitivity that made both species susceptible to the disease.

A sensitive person prone to emotional disturbances was considered particularly at risk from external influences that might cause rabies to appear. “Violent affections of the mind, operating suddenly and powerfully on the nervous system” could in rare cases lead to rabies or, at the very least, exacerbate the symptoms in nervous patients, according to Manchester physician Samuel Argent Bardsley (who was more commonly known for promoting quarantine as a way of containing the disease).

For one Lancashire man, John Lindsay, the difficulty of feeding his family drove him to anxiety and despair, exacerbated by a bout of overwork and a lack of food. Fatigued, suffering from headaches, and fearing liquids, Lindsay remembered being bitten by a supposed mad dog some twelve years previously. Amidst violent spasms, visions of the black dog “haunted his imagination with perpetual terrors” and made recovery seem “hopeless.” With reluctance, Bardsley concluded that this was a case of spontaneous rabies. Emotional distress and an overactive imagination had caused and aggravated the disease.

During the mid-nineteenth century prominent London doctors argued that rabies was closely linked to hysteria and had emotional and imaginative origins, much to the chagrin of veterinarian William Youatt, the leading opponent of theories of spontaneous generation.[ii] In the 1870s alienists (otherwise known as psychiatrists) then lent greater intellectual credibility to theories of rabies’ emotional aetiology. They stressed the powerful sway that emotions and the mind held over individuals, especially in the enervating conditions of modern life.

Physician and prominent British authority on mental disorders Daniel Hack Tuke argued that disturbing emotions and images could create hydrophobic symptoms in susceptible individuals. Referencing Bardsley, and drawing on French examples, he argued that “such cases illustrate the remarkable influence exerted upon the body by what is popularly understood as the Imagination.” The very act of being bitten by a dog and the “fearful anticipation of the disease” was enough to spark rabies, even if the dog was not rabid. Even rational and emotionally hardy doctors had reported suffering from hydrophobic symptoms when recalling the appalling scenes of distress during the examination and treatment of hydrophobic patients.[iii]

Tuke suggested that in some cases excitement or other forms of mental, emotional, and sensory overstimulation could activate the virus years after a bite from a rabid dog. He drew on a striking case from the United States, as reported by the Daily Telegraph in 1872. A farmer’s daughter had been bitten by a farm dog when choosing chickens for slaughter. The wound healed and no signs of rabies appeared until her wedding day two months later. The “mental excitement” of this life-changing event brought on a dread of water. After the ceremony she experienced spasms and “died in her husband’s arms.”

Tuke reproduced the newspaper’s view, and more generalized gendered assumptions about female emotional delicacy, that such “nervous excitement” had a profound influence on the “gentler” sex. In this case, her nerves were considered to have been exacerbated by the anticipation of the impending wedding night, which was often framed as an emotionally fraught sexual encounter.[iv]

Dr William Lauder Lindsay of the Murray Royal Asylum in Perth, Scotland, was another prominent proponent of the view that rabies was a predominately emotional disease. The disease, he argued, “is frequently, if not generally, the result of terror, ignorance, prejudice, or superstition, acting on a morbid imagination and a susceptible nervous temperament.” Under the sway of their overactive imagination, an individual could take on “canine proclivities,” such as barking and biting. In classist language, Lindsay argued that rabies showed the influence of mind over the body, especially in the “lower orders of the community.”[v]

The British alienists’ depiction of rabies as a predominately emotional disorder made its way across the Atlantic. In the mid-1870s Dr William A. Hammond, President of the New York Neurological Society and leading American authority on mental disorders, stated that the evidence from Europe suggested that heightened emotions might cause rabies in humans. More generally, New York physicians and neurologists debated whether or not individuals had died from actual rabies or fears of the disease, and discussed how fear might turn a bite from a healthy animal into death.[vi]

The alienists lent greater credibility to earlier theories that rabies anxieties could lead to imaginary or spurious rabies. Tuke asserted that fears of rabies could create an imaginary manifestation of the disease. “Hydrophobia-phobia” demonstrated clearly the “action of mind upon mind,” and was distinct from the “action of the mind upon the body” in those cases when emotional distress led to actual rabies.

Echoing Tuke, Lindsay identified women as a particular vector in triggering spurious rabies. He asserted that they spread rabies fears, as supposedly shown by an Irishwoman in Perth who had frightened her husband into believing he had rabies. For Lindsay, this was a classic case of spurious (or false) rabies, which required the rational and firm intervention of medical men, such as himself, to stamp out. But he felt himself fighting an unstoppable tide. For in America, as well as Britain, the press ignited fears and created spurious rabies in susceptible individuals.[vii]

Lindsay and Tuke believed that rabies could, in some cases, be transmitted from dogs to humans through biting and “morbid saliva.” But some doctors controversially argued that it was a purely emotional disease. Eminent Parisian doctor Édouard-François-Marie Bosquillon set the tone in 1802 when he confidently declared that rabies in humans was caused solely by terror. His observation that individuals were struck with hydrophobic symptoms, including “loss of reason” and “convulsive movements,” at the sight of a mad dog provided sufficient proof.

Horror-inducing tales of rabies, fed to children from a young age, created fertile conditions for the development of the disease, particularly in “credulous, timid and melancholic” people. Gaspard Girard, Robert White, William Dick, and J.-G.-A. Faugére-Dubourg developed this line of argument as the century progressed. And the theory had traction. In the 1890s, Philadelphian neurologist Charles K. Mills insisted that rabies was purely a disease of the nerves. Such theories were, however, contentious, and Tuke cautioned against those who asserted that rabies was solely an imaginary disease.[viii]

Nonetheless, these theories cemented rabies as an emotionally fraught disease and reinforced the dangers of dog bites: even a bite from a healthy dog could trigger a lethal neurological reaction in the swelling ranks of anxious individuals.

Dr Chris Pearson is Senior Lecturer in Twentieth Century History at the University of Liverpool. His next book Dogopolis: How Dogs and Humans made Modern London, New York, and Paris is forthcoming (2021) with University of Chicago Press. He runs the Sniffing the Past blog and you can download a free Android and Apple smart phone app on the history of dogs in London, New York, and Paris. You can find Chris on Twitter @SniffThePastDog.


Cover image: ‘Twenty four maladies and their remedies’. Coloured line block by F. Laguillermie and Rainaud, ca. 1880. Courtesy of the Wellcome Collection, https://wellcomecollection.org/works/pysjar4f/images?id=mpqquvrh [accessed 25 March 2021].

[i] Contemporaries sometimes used “rabies” and “hydrophobia” interchangeably to refer to the disease in animals and dogs, but sometimes used “rabies” to refer to the disease in dogs and “hydrophobia” for humans. With the rise of germ theory at the end of the nineteenth century, “rabies” gradually replaced “hydrophobia.” For simplicity’s sake, I will use “rabies” to refer to the disease in humans and animals unless I quote directly from a historical source.

[ii] Samuel Argent Bardsley, Medical Reports of Cases and Experiments with Observations Chiefly Derived from Hospital Practice: To which are Added an Enquiry into the Origin of Canine Madness and Thoughts on a Plan for its Extirpation from the British Isles (London: R Bickerstaff, 1807), 238-50, 284, 290; “Hydrophobia”, The Sixpenny Magazine, February 1866; Neil Pemberton and Michael Worboys, Rabies in Britain: Dogs, Disease and Culture, 1830-2000 (Basingstoke: Palgrave Macmillan, 2013 [2007]), 61-3.

[iii] Daniel Hack Tuke, Illustrations of the Influence of the Mind Upon the Body in Health and Disease Designed to Elucidate the Action of the Imagination (Philadelphia: Henry C. Lea, 1873), 198-99, 207.

[iv] Tuke, Illustrations, 200-1; Daily Telegraph, 11 April 1872; Peter Cryle, “‘A Terrible Ordeal from Every Point of View’: (Not) Managing Female Sexuality on the Wedding Night,” Journal of the History of Sexuality 18, no. 1 (2009): 44-64.

[v] William Lauder Lindsay, Mind in the Lower Animals in Health and Disease, vol. 2 (London: Kegan Paul, 1879), 17; William Lauder Lindsay, “Madness in Animals,” Journal of Mental Science 17:78 (1871), 185; William Lauder Lindsay, “Spurious Hydrophobia in Man,” Journal of Mental Science 23: 104 (January 1878), 551-3; Pemberton and Worboys, Rabies, 96-7; Liz Gray, “Body, Mind and Madness: Pain in Animals in the Nineteenth-Century Comparative Psychology,” in Pain and Emotion in Modern History, ed. Rob Boddice (Basingstoke: Palgrave, 2014), 148-63.

[vi] “Hydrophobia: The Subject Discussed by Medical Men,” New York Times, 7 July 1874; Jessica Wang, Mad Dogs and Other New Yorkers: Rabies, Medicine, and Society in an American Metropolis, 1840-1920. (Baltimore: Johns Hopkins University Press, 2019), 150-1.

[vii] Tuke, Illustrations, 198-99; Lindsay, “Spurious Hydrophobia in Man,” 555-6, 558.

[viii] Lindsay, Mind in the Lower Animals, 176; Édouard-François-Marie Bosquillon, Mémoire sur les causes de l’hydrophobie, vulgairement connue sous le nom de rage, et sur les moyens d’anéantir cette maladie (Paris: Gabon, 1802), 2, 22, 26; Vincent di Marco, The Bearer of Crazed and Venomous Fangs: Popular Myths and Delusions Regarding the Bite of the Mad Dog (Bloomington: iUniverse, 2014), 141-47; Pemberton and Worboys, Rabies, 64; Tuke, Illustrations, 198-99; Wang, Mad Dogs, 151-2.


Cheltine: The Diabetic Food That Wasn’t



Today, the use of science in food and drink marketing is so commonplace that we take it as a given, but this was not the case in the nineteenth century. In Britain, the introduction of scientific discourse into advertising after the Great Exhibition of 1851 was a novelty, creating artificial demand for products that trusting consumers believed would improve their lives. In many cases, however, these products were little more than placebos.

At the time, diabetes was frequently mentioned in the popular press, with many articles warning healthy people that they might be ‘unconscious victims’ if they suffered from ‘thirst, depression, lassitude, irritability without apparent cause and diminution of both physical and mental faculties’. 

Most physicians stressed the importance of diet in diabetes management, yet nobody could agree on the best approach. For some, high-carbohydrate diets were the most effective option for diabetic patients, while for others, diets high in fat and animal protein produced better outcomes. Others still advocated radical regimes that amounted to starvation (a liquids-only diet) as an effective treatment.

It was this lack of consensus amongst medical professionals, coupled with a growing public interest in science, that led canny manufacturers to produce ‘diabetic foods’. From 1880 onwards, the market became saturated with diabetic water, bread, flour, biscuits, chocolate and even whiskey. On the whole, these foods were not especially adapted for diabetics; instead, they reflected a desire to tap into consumer health concerns for purely financial gain.

In Britain, the leading ‘diabetic’ food brand of the early 20th century was Cheltine, which was launched by the Worth’s Food Syndicate of Cheltenham in December 1899 after a competition to name a new range of diabetic foods. Cheltine was a brown granular powder that could be stirred into hot milk or water and, according to its product launch announcement, was ‘harmless, flesh-forming and palatable’. It came in an eye-catching red and yellow tin, which featured a mock coat of arms with images of trees to suggest that it was natural, pure and healthy.

Advertisement for Cheltine (1900), source: Smithsonian National Museum of American History, https://americanhistory.si.edu/collections/search/object/nmah_1756685 

Almost immediately after its launch, Cheltine was called out by The Lancet (30 June 1900) for its misleading packaging, which stated that the food ‘cannot turn into sugar’. Cheltine responded instantly by claiming that this was merely a clerical error.

Nonetheless, just one week later, the brand was challenged once again by The Lancet (6 July 1900) for claiming in advertisements that its starch was so modified that it could not be converted into diabetic sugar during digestion. Dr Dixon expressed concern that not only was this claim false, but that it also spread the dangerous belief that starch was suitable for diabetic patients.

Cheltine immediately tried to counter these accusations by building a case for its credibility. It exhibited its products at commercial fairs, promoted its baker Mr N.J. Bloodworth as a ‘scientific baker’ and used testimonials from supposedly satisfied customers in its advertisements. However, these testimonials, which claimed that diabetics had been ‘perfectly cured within months’, were signed only with initials, making it very difficult to trace the customers’ identities – if indeed they were real at all.

Cheltine’s bold claims were also propagated by a 1904 Industrial Gloucestershire publication, which described Cheltine as a ‘pioneer’ that had helped diabetics gain a complete recovery thanks to its ‘special peculiarities both in nature, proportion of ingredients and manner of treatment’. 

Again, doctors struck back. The British Medical Journal (10 March 1906; 21 May 1910) described these assertions as ‘startling’ and warned that Cheltine should not be recommended for diabetics, as its chemical composition was unsafe. Moreover, the product was ‘unpleasant to the taste’ with its ‘rancid almond flavour’ and appearance of ‘pulverised rusk or toast’, which made it wholeheartedly ‘unworthy of consideration’. This verdict was far removed from Cheltine’s own description in its advertisements as a ‘perfect cooked food’ that was ‘thoroughly pleasant to the taste’.

Despite the constant concerns of doctors, Cheltine’s diabetic food remained popular throughout the first two decades of the 20th century and advertisements were featured regularly in the local and national press. However, this came to an abrupt halt in the 1920s following the discovery of insulin by Dr Frederick Banting and Dr Charles Best. 

Insulin helped diabetics to better manage their condition, allowing for more flexibility in diet and extending life expectancy. Now, with a bona fide medical solution, diabetics began to rely increasingly on insulin and medical advice rather than on patent foods like Cheltine.

But Cheltine was not prepared to give up so easily. It quickly turned its attention instead to anaemia, marketing what it called a ‘medicated food’ that would ‘regenerate the blood, revive drooping strength and diminish languor’. Unsurprisingly, there was no scientific evidence to support this claim and there was little to distinguish the ‘anaemic’ food from Cheltine’s previous ‘diabetic’ food. Nonetheless, in promptly rebranding itself, Cheltine was able to appeal to a new segment of consumers and retain its market share. 

Nowadays, more rigorous controls and legislation around food and medicine mean that consumers are not duped to the same extent as their Victorian and Edwardian counterparts. Yet, the market is still saturated with ‘science-based’ commercial products, such as collagen supplements, nootropic drinks and activated charcoal, that we still know very little about. 

Therefore, studying food marketing from an historical perspective reminds us of the importance of reflecting upon and keeping a critical distance from such products, and the hope and promises that come with them. They could well be the Cheltine of the future.

Dr Lauren Alex O’Hagan is a Research Associate in the Department of Sociological Studies at University of Sheffield. Her research interests and publications focus largely on class conflict, literacy practices and consumer culture in late Victorian and Edwardian Britain, using a multimodal ethnohistorical lens.

Cover Image: Insulin pen, source: Unsplash, https://unsplash.com/photos/J4HwEwZtIs8


COVID-19, ‘Big Government’, and the Prohibition of Alcohol: Crisis as a Transnational Moment for Social Change


Throughout history, crises have often led to enormous social and economic reform as policymakers are forced to come up with new ways to meet unexpected demands. As Walter Scheidel argues in his book, The Great Leveller (2017), mass violence has been the primary impetus for the decline of inequality throughout world history, most recently with the Second World War serving as a watershed in relation to increased government spending on social programmes in many of its participating states. Although a crisis of a very different nature, the current coronavirus pandemic has also brought about similar shifts, with governments running huge budget deficits to protect jobs and counteract the threat of a looming recession caused by travel restrictions and lockdowns.

We have also witnessed governments experimenting with creative solutions to crises that stretch across borders, as with the current global pandemic. For a variety of reasons, a small handful of countries have resorted to banning the sale of intoxicants. One of the most debated aspects of South Africa’s lockdown has been its prohibition on the sale of alcohol and cigarettes, intended to reduce hospital admissions and secure beds for COVID-19 patients. Admissions have dropped by two-thirds due to reductions in alcohol-related violence and accidents, but such draconian measures have also meant the rise of black-market trade and the near-collapse of the country’s proud wine industry.

The sale of alcohol was also banned in the Caribbean island of Sint Maarten, a constituent country of the Netherlands, and in Nuuk, the capital of Greenland, over its role in exacerbating incidents of domestic violence that came with the lockdown. In Thailand, the prohibition on alcohol was put in place to prevent the spread of the virus in social gatherings. In each setting, such policies were deemed drastic but necessary, carefully implemented for their advantages in tackling a variety of health concerns whilst also considering their clear downsides.

Although instituted under entirely different circumstances, the First World War was also a moment when similarly harsh controls were imposed on the sale of alcohol across the world. France and Russia were the first to institute bans on absinthe and vodka, respectively, due to concerns over their impact on wartime efficiency. Countries in which anti-alcohol temperance movements were already influential also implemented tough restrictions of varying degrees. Although the production and sale of alcohol had already been banned in various local jurisdictions in Canada and the United States, national prohibition came to fruition in both countries because of the war. Alcohol was not banned in Britain, but the country nevertheless instituted far-reaching controls on the distribution of drink under the Central Control Board (CCB), established in 1915 to enforce higher beverage duties and shorter closing hours in pubs.

In almost every instance, it was the context of the war that spurred the move towards instituting these tough restrictions. Temperance activists in North America had been pushing for a national prohibition for decades, but the conditions of the war, such as the rise of anti-German sentiment directed towards German-American breweries such as Anheuser-Busch, brought the federal implementation of prohibition to the forefront of national politics. In Britain, part of the CCB’s responsibility was the nationalisation of pubs and off-licenses situated in parts of the country that were of strategic importance to the war effort.

These contexts directly parallel what we’re seeing in South Africa and Thailand, where extraordinary circumstances necessitated extraordinary countermeasures. However, there is also an important difference that must be stressed: while current lockdown prohibitions are merely temporary, most advocates of prohibitions and controls a century ago believed that such measures were to be permanent, based on their view that there were no advantages to permitting the existence of ‘demon drink’ in society. The ban on the distillation of vodka instituted under Imperial Russia in 1914 was maintained after the October Revolution and was not scrapped until after Lenin, himself an ardent prohibitionist, died in 1924. Yet, within the British context, the First World War effectively reshaped alcohol licensing for several generations, as high beverage duties and shorter opening hours were mostly preserved into the interwar and postwar eras.

These cases highlight the broader implications of the social and economic reforms being implemented today. Right-wing governments in both Britain and Japan have approved record levels of government spending in the form of economic aid and stimulus. As Bernie Sanders ended his bid for the Democratic nomination in April 2020, politicians of both the left and the right debated the federal implementation of universal healthcare and paid sick leave in light of the public health crisis. Most recently, the Spanish government announced a €3 billion universal basic income scheme to stimulate the pandemic-hit economy through increased consumer spending. A columnist for The Washington Post was clearly onto something when he declared that ‘there are no libertarians in foxholes’.

It is, however, decidedly too early to predict the long-term impacts of COVID-19 and whether they will lead to what many hope will be a reversal of the neoliberal reforms that have dominated economics since the 1970s. One cannot forget that the ‘Keynesian Resurgence’ in stimulus spending during the Financial Crisis of 2007-08 was immediately followed by the tragedy of the Eurozone Crisis and the traumas of austerity measures that devastated the public sectors of Greece, Spain, Italy, Britain and others. Despite that, the power of abrupt changes to undermine the status quo should not be underestimated, as we saw with the global ‘wave’ of alcohol prohibitions a century before. History, therefore, is an apt reminder that crises are moments when ‘radical’ reforms that were previously only imagined can eventually become reality.

Ryosuke Yokoe is a historian of medicine, science, and public health, presently affiliated with the University of Sheffield as an honorary research fellow. He recently completed a PhD on the medical understandings of alcohol and liver disease in twentieth-century Britain. You can find him on Twitter @RyoYokoe1.

Cover image: Array of liquor bottles, courtesy of Angie Garrett, https://www.flickr.com/photos/smoorenburg/3312808594/ [accessed 28 May 2020].


Dawson’s ‘Big Idea’: The Enduring Appeal of the Primary Healthcare Centre in Britain


May 2020 marks the centenary of the publication of the Interim Report of the Consultative Council on the Future of Medical and Allied Services, popularly known as the Dawson report after its principal author, Lord Dawson of Penn.[i] The report, commissioned in 1919 by the newly established Ministry of Health, outlined a plan to bring together existing services funded by national health insurance, local authorities, and voluntary bodies in a coherent and comprehensive healthcare system. The final report was never published, being consigned to oblivion by a worsening economy and changed political climate. Though cautiously welcomed by professional leaders, Dawson’s plan was condemned by a hostile press as grandiose and unaffordable.[ii] However, recent NHS policy directives regarding Integrated Care Systems show that the principal task which Dawson’s group had set itself, that of successfully integrating primary, secondary and ‘allied’ health services, is one with which NHS leaders are still grappling today.[iii]

Lord Dawson of Penn, courtesy of the British Medical Association archive

Central to Dawson’s plan, and its most revolutionary idea, was the creation of a network of ‘primary health centres’ (PHCs) in each district in which general practitioners (GPs) could access diagnostic, surgical, and laboratory facilities for their patients and which would also house infant welfare and maternity services, facilities to promote physical health, and space for administration, records, and postgraduate education. GPs and other professionals would see and treat patients at PHCs, referring only complex cases to specialists at secondary care centres (essentially district hospitals) located in large towns, while patients needing the most specialized treatment would be referred to regional teaching hospitals with attached medical schools. This ‘hub and spoke’ model is one to which recent generations of NHS health planners have returned time and again, seemingly unaware of its antecedents.

A firm believer in teamwork, Dawson hoped that collaborative use of PHCs by GPs would encourage group practice and multi-disciplinary working. But the individualistic nature of general practice at that time meant GPs remained wary of his ideas, despite the fact that examples of PHCs already existed in Gloucestershire and in Scotland, and many of the facilities they were meant to comprise could be found in GP-run cottage hospitals and Poor Law infirmaries.[iv] Experiments with architect-designed health centres in the 1920s and 1930s failed to elicit a major change in professional or governmental attitudes.[v] In 1948 the NHS brought public, voluntary and local authority hospitals under state control, but in its early years the promise of new PHCs remained largely unrealised.[vi] Proprietorial traditions and fear of local government control led to a mushrooming of purpose-built, GP-owned practice premises between the late 1960s and 1990s, independently of the local authority-owned health centres for which there was a major building programme in the 1970s.[vii]

Illustration of a Primary Health Centre, from the Dawson Report, courtesy of the BMA archive

Although by the late twentieth century the Dawson report had largely been forgotten, interest in PHCs resurfaced in the early 2000s with a major investment in primary healthcare facilities through the establishment of Local Improvement Finance Trusts (LIFT). These were a form of private finance initiative designed to provide state-of-the-art community health and social care hubs housing GP practices and other services. Unfortunately, LIFT buildings proved more expensive than anticipated and their facilities, intended to promote the transfer of work from secondary to primary care, were often underutilised.[viii] While these were being constructed, the Labour health minister, Lord Ara Darzi, announced the establishment of a number of ‘polyclinics’, bearing a close resemblance to Dawson’s PHC idea. However, the Darzi Centres that were established were either mothballed or repurposed, being condemned as an expensive ‘white elephant’ by professional leaders.[ix]

In the last few years a ‘quiet revolution’ has been taking place in the NHS in England involving attempts to dismantle the financial and institutional barriers between primary, secondary and community care created by the internal market. Its byword, ‘Integration’, echoes Dawson’s overriding goal and the ‘hub and spoke model’ he advocated is now well established. Meanwhile, the pressures of unending demand have forced GPs to collaborate as healthcare providers in locality groups called Primary Care Networks (PCNs). Though guidance on these is not prescriptive, some PCNs have adopted the idea of a community ‘hub’ housing shared diagnostic and treatment facilities much as Dawson had envisaged.[x]

While the full impact of COVID-19 on our struggling health services is still unknown, the abiding necessity for all parts of the NHS to collaborate, communicate and mutually support each other during this crisis underlines the value and relevance of Dawson’s vision of integrated services. It remains to be seen if, in its aftermath, his ‘big idea’ of ubiquitous multi-purpose PHCs will come any closer to being realised.

Chris Locke is a fourth year PhD student in the History Department at the University of Sheffield. His research is focused on the political consciousness of British GPs and their struggle for professional self-determination in the early Twentieth Century.

Cover image: LIFT-built Primary Care Centre, Retford, Nottinghamshire, photographed by the author.

[i] Interim Report of the Consultative Council on the Future of Medical and Allied Services, Cmd 693 (HMSO, 1920). For an account of the origins and significance of the report see Frank Honigsbaum, The Division in British Medicine (London, 1979), chapters 6-12.

[ii] The British Medical Association’s blueprint for health services reform, A General Medical Service for the Nation (1930) and the report by Political and Economic Planning, The British Health Services (1937) both referenced the Dawson report, and it clearly influenced the Beveridge report, Social Insurance and Allied Services (1942).

[iii] https://www.kingsfund.org.uk/publications/making-sense-integrated-care-systems (last accessed 3 April 2020)

[iv] The report referenced the hub and spoke model of healthcare facilities overseen by Gloucestershire County Council’s Medical Officer of Health, Dr J Middleton Martin. Commentators also noted similarities with Sir James McKenzie’s Primary Care Clinic in St Andrews and Trade Union-run Medical Aid Institutes in South Wales.

[v] Jane Lewis and Barbara Brookes, ‘A Reassessment of the Work of the Peckham Health Centre 1926-1951’, Health and Society vol 61, 2, 1983 pp.307-350; For Finsbury Health Centre see A B Stewart, ‘Health Centres of Today’, The Lancet, 16 March 1946 pp. 392-393.

[vi] For one exception see R H Parry et al, ‘The William Budd Health Centre: the First Year’, British Medical Journal, 15 March 1954 pp.388-392.

[vii] BMA General Practitioners Committee guidance: The Future of GP Practice Premises (Revised 2010)

[viii] Nottinghamshire Local Medical Committee, NHS LIFT in Nottinghamshire (Nottingham, 1997)

[ix] Peter Davies, ‘Darzi Centres: an expensive luxury the UK can no longer afford?’, British Medical Journal, 13 November 2010, 341; c6237.

[x] https://www.england.nhs.uk/primary-care/primary-care-networks/ (last accessed 3 April 2020)

 


Jonas Salk turns 105: Some thoughts on lessons from history


Jonas Salk would have been 105 today, 28 October. He is remembered as the inventor of the polio vaccine who, when asked how much money he stood to make, declared: ‘There is no patent. Could you patent the sun?’

Of course, “it’s more complicated than that”. Salk was part of a multi-national, multi-agency project to develop prophylactics. Without the use of “his” injectable vaccine and the oral vaccine developed by rivals on the other side of the iron curtain, humanity would not be on the verge of eliminating polio. (For more on that story, see the excellent book by Dora Vargha.)[1] And one of the reasons Salk didn’t patent the vaccine was that it was unpatentable.

But let’s not be uncharitably pedantic. It is, after all, his birthday.

In the wake of recent reports of resurgent infectious diseases – including polio – vaccination is back in the news (if, indeed, it ever went away). Matt Hancock, the UK’s Secretary of State for Health and Social Care, has suggested the government might consider mandatory vaccination. Public health experts have cautioned against this, using (in part) historical evidence. In the nineteenth century, compulsory vaccination generated a well-organised, vocal and occasionally violent anti-vaccination movement,[2] the effects of which still haunt Britain’s public health authorities.

Public health has taken its lessons from high-profile examples of crisis – smallpox, pertussis or measles to name but three.[3] But not all problems come from rejection of vaccines. With polio in the 1950s, the problem was the government’s inability to meet demand.

Salk’s vaccine (yes, we’ll give him credit here – after all, contemporaries referred to the inactivated poliomyelitis vaccine simply as “Salk”) became commercially available in 1955. The British government announced with great fanfare that it would provide the vaccine for free to all children and young adults. There was clear demand for it. This invention – in the same vein as space exploration and penicillin – was a marker of modernity, the power of science to solve once-intractable problems.

Unfortunately, there was not enough to go around. In 1955, a manufacturing defect at Cutter Laboratories resulted in the accidental infection of hundreds of American children. As a result, the British banned American imports and chose to use domestic factories to produce a “safer” form of the vaccine.[4] But Britain didn’t have the capacity to produce enough doses in time. Shortages prompted complaints from the British press and parents, and – despite the demand – few registered for the vaccine because of the long waiting lists and inconvenience.

As proof of the demand for the vaccine – despite the Cutter incident – local authorities were swamped with requests when high-profile cases made the news. The death of professional footballer Jeff Hall showed that even fit, young people could be affected, and created a surge in the number of younger adults presenting themselves and their children for the jab. In the ensuing shortages, the health minister blamed people for their apathy – if they’d just done as they were told, when they were told, the government could have better distributed the vaccine over the course of the year. This did not go down well as a public relations exercise.

This crisis was eventually overcome through the introduction of the oral polio vaccine in the early 1960s. Because it was taken on a sugar cube, parents were much more willing to present their children. It was a quick process that could be done anywhere; it didn’t hurt (though its taste left something to be desired); and it could be manufactured so easily, and in such volume, that there was no need to wait around for months for the next batch to become available.

Of course, all historical circumstances are different. Anti-vaccination literature is certainly more visible than it was in the 1950s. Populations are more mobile. The immediate memory – even fear – of certain infectious diseases has faded.

At the same time, the intriguing part of this history – at least to this historian – is not why people don’t vaccinate their kids. It’s why so many do.[5] The vast majority of children receive some form of vaccination – upwards of 95 per cent – even if they do not always complete every course in the recommended time frame.

The great improvements in vaccination rates over the past 70 years have come from better administration. Easier-to-administer vaccines. More-robust procedures for following up on missed appointments. Advertising. Having local health professionals answer the specific questions and concerns individual parents might have. Following up with patients who might otherwise slip through the surveillance of public health authorities (such as those who do not speak English, regularly change addresses, have other acute social care needs). All these things required resources which have been squeezed significantly since public health was reintegrated into already-struggling local authorities.

It would be unwise for a historian to say that this is the cause of the problems, or that extra funding will provide a magic-bullet solution.

It is, however, worth reminding ourselves that crises in vaccination policy are not new. We have experienced them before. And not all of them have been due to a lack of demand or fear of a particular vaccine. The 1950s polio example shows us that more practical issues can be at play, and that the public and its collective behaviour are not necessarily at the root of them.

Gareth Millward is a Wellcome Research Fellow at the Centre for the History of Medicine at the University of Warwick. He has worked on the history of disability, public health, vaccination and most recently sick notes. His book Vaccinating Britain was published in January 2019 by Manchester University Press.

[1] Dora Vargha, Polio across the Iron Curtain: Hungary’s Cold War with an Epidemic (Cambridge: Cambridge University Press, 2018).

[2] Nadja Durbach, Bodily Matters: The Anti-Vaccination Movement in England, 1853–1907 (Durham: Duke University Press, 2005).

[3] Stuart Blume, Immunization: How Vaccines Became Controversial (London: Reaktion, 2017).

[4] Hannah Elizabeth, Gareth Millward and Alex Mold, ‘“Injections-While-You-Dance”: Press advertisements and poster promotion of the polio vaccine to British publics, 1956-1962’, Cultural and Social History 16:3 (2019): 315-36.

[5] Gareth Millward, Vaccinating Britain: Mass Vaccination and the Public Since the Second World War (Manchester: Manchester University Press, 2019), p. 1.
