

The Long Fall of King Coal


When did Britain’s age of coal come to an end? A common-sense answer points to the decisive defeat the miners suffered during the great strike of 1984-5 and the swift closure of collieries in the decade that followed.

Energy scholars such as Timothy Mitchell are more likely to point to the transition towards an oil economy in the immediate postwar period.[1] Long before the mid-1980s, Britain had become a car-driven society dependent on petrochemical manufacturing processes and oil had even begun to play a significant role in Britain’s electricity generation by the early 1970s.[2]

King coal’s fall was certainly longer than a story of rapid contraction allows for, but rather than being squarely located in an earlier time-period, it is a story that stretches into the present. British coal production and employment peaked at almost 300 million tons and over a million miners during the second decade of the twentieth century, and both have been in more or less sustained contraction since the early 1920s. It was only in 2020, in the midst of lockdown, that Britain went without coal-fired electricity for two months for the first time in over 130 years.

These developments are a sign of things to come. Britain is on track to end coal-fired electricity by the mid-2020s. Scotland’s last coal power station, Longannet, closed in 2016. Fourteen years earlier, in 2002, the curtain was brought down on a centuries-long historical saga when miners rose from the last of the drift mines dug to supply Longannet for the final time. This brought Scottish deep coal mining to an end.

I was finalising my PhD thesis on deindustrialization in Scotland’s coalfields when Longannet power station closed. My research included several interviews with men who had worked at the complex and were among the nation’s last miners. My first monograph, Coal Country: The Meaning and Memory of Deindustrialization in Postwar Scotland, was published this year.[3]

Coal Country approaches deindustrialization, the declining significance of industrial activities to employment and economic production, as a long-term historical economic process which had foundational cultural and political consequences. It understands the entire lifetime of Longannet power station, and the modernised mining complex which directly fuelled it with coal won beneath the Firth of Forth, as framed by deindustrialization.

Longannet was planned during the 1960s, against the backdrop of the numerical peak of coal mining job losses. Scottish coalfield employment stood at just over 30,000 in 1970, when the power station began producing electricity, less than half what it had been a decade before. These tens of thousands of job losses were negotiated through moral economy customs that evolved between the management of Britain’s nationalised coal industry and the National Union of Mineworkers (NUM).

Closures were agreed in consultation with union representatives, transfers to suitable jobs were found for miners within travel distance of their homes and suitable accommodations were made for injured, disabled and elderly miners, including the option to retire early in some cases.

These practices evolved over time, originating in responses to sustained closures in the Shotts area of Lanarkshire after the Second World War, when the workforce defied Coal Board expectations of mass emigration to collieries in eastern and central Scotland. Instead, a ‘take work to the workers’ policy was pursued by civil servants, including the direction of inward investment in engineering to stabilise the local labour market. This approach was subsequently followed across the Scottish coalfields during the 1950s and 1960s.[4]

Job losses and fears of economic insecurity nevertheless fuelled dissatisfaction. Longannet became a key site in the 1972 strike over miners’ wages when the NUM Scottish Area (NUMSA) mounted mass pickets who clashed with police.[5] A decade earlier, a ‘strong coal lobby’ connected to the Scottish Office had insisted on investment in additional electricity capacity due to concern about the sector’s future and the employment consequences.[6] Later in the 1960s, the NUMSA responded to mounting colliery closures by becoming one of the labour movement’s leading proponents of a devolved Scottish parliament.[7]

Longannet strengthened the articulation of a Scottish national coalfield community that overcame traditional parochial associations. Pat Egan relocated from Twechar in Lanarkshire to Glenrothes in Fife so he could take up work at the complex after Bedlay colliery shut in 1982. When I interviewed him in 2014, Pat explained that regional voting blocs in union elections dissipated over time and that trusting relationships were built between men who travelled to work at Longannet each day from Lanarkshire, Fife, Clackmannanshire and the Lothians.[8]

Coal Country confronts the need to understand deindustrialization as a formative structural process and an intensely personal experience whose intricacies determined life courses and remoulded community, class and nationhood. The contraction of Scotland’s coalfields unfolded across the second half of the twentieth century, but its pace was determined by the agency of workers, politicians, nationalised industry managers and civil servants.

Archival records from government, industry and unions provide a detailed vantage on the contingencies that shaped deindustrialization. Oral testimonies are insightful for understanding how workplace closures and job losses were experienced in the coalfields and what these changes came to mean in the twenty-first century.

Earlier this year, Longannet power station’s boiler house was subject to a controlled demolition and the large chimney is set to follow soon. Visible signs of the role coal played in transforming Scotland over the last two centuries are disappearing from the landscape, whilst the energy transition that led to Longannet’s closure continues apace. The Neart na Gaoithe windfarm is under construction in the North Sea near the Fife coast.

Moral economy sentiments and arguments over the responsibility of governments to use Scottish national resources in the interests of communities continue to animate workers’ perspectives. Unions have condemned the ‘paltry return’ of local jobs and production provided by wind turbine multinational supply chains. The concerns and conflicts which animated deindustrialization in the Scottish coalfields will continue to reverberate in the context of debates over a ‘just transition’ to renewables.

Ewan Gibbs is a lecturer in Economic and Social History at the University of Glasgow. He published Coal Country: The Meaning and Memory of Deindustrialization in Postwar Scotland with the University of London Press and is beginning a BA-Wolfson Fellowship studying energy transitions. You can find Ewan on Twitter @ewangibbs


Cover image: Longannet Power Station 7 December 2011, https://commons.wikimedia.org/wiki/File:Longannet_Power_Station_7_December_2011.jpg [accessed 25 July 2021]

[1] Timothy Mitchell, Carbon Democracy: Political Power in the Age of Oil (London: Verso, 2013).

[2] James Marriott and Terry Macalister, Crude Britannia: How Oil Shaped a Nation Kindle Edition (London: Pluto, 2021).

[3] Ewan Gibbs, Coal Country: The Meaning and Memory of Deindustrialization in Postwar Scotland (London: University of London Press, 2021).

[4] The National Records of Scotland, Edinburgh, Scottish Economic Policy, 4/762, H. S. Phillips, Research studies: geographical movement of labour, 9 August 1948.

[5] Jim Phillips, The Industrial Politics of Devolution: Scotland in the 1960s and 1970s (Manchester: Manchester University Press, 2008), p. 126.

[6] The National Archives, Kew, London, Ministry of Fuel and Power, 14/1495, Ministry of Power General Division, TUC and Fuel and Power policy brief for minister’s meeting on 12 February 1963.

[7] STUC, Annual Report 1967–1968, lxxi (1968), 191–2.

[8] Pat Egan, interview with author, Fife College, Glenrothes, 5 February 2014.


‘Since all confess the nat’ral Form Divine, What need to Swell before or add behind?’


We live in a world often dominated by the latest fashions and prevalent images of body modification. Whether in traditional print media or on social media sites, women in particular are bombarded with images of often unattainable body shapes, whilst simultaneously being encouraged to remain natural in appearance. A curvaceous body type can be obtained by plastic surgery, or is now easily replicated by the wide variety of padded products fashion companies sell to change how our bodies look in clothes. These trends and societal expectations may seem a product of the modern age, but how true is this?

In reality, this trend has a much longer history. The eighteenth century saw a variety of extreme fashions introduced by women of the elite. This included wearing large hoops under dresses, bum padding, and even stomach padding to give the illusion of pregnancy. 

Celebrities and social media stars are at the forefront of establishing new fashion trends. But just as the modern media and public are intrigued by and obsessed with the fashions and bodily choices of celebrities, so too was eighteenth-century society. Back then, the leaders of fashion were known as the beau monde. Fashion gave elite women a form of empowerment largely unavailable to them elsewhere in eighteenth-century society.[1] This was a sphere they could dominate, arguably giving them a form of pleasure unavailable anywhere else.

By the 1730s, the Rococo style was deeply entrenched in both French and English fashions, with a focus on the feminine being the most crucial quality of dress for women. This translated into using padding on the hips or hoops to create a new body shape. 

The use of padding to create a curvaceous body shape culminated in the 1780s in a rounded silhouette: hip padding alongside bum pads or rolls to give the illusion of a more rounded physique, as well as starched kerchiefs tucked into the front of the dress to increase the apparent size of the bust.[2] This was not a new development in fashion; however, the changing shape of the body was reaching new extremes of exaggeration.

In 2021, we may often see satire and humour directed at those in the public eye on social media – for being too revealing, for having exaggerated bodily features, for all manner of fashion choices. The satirical prints of the eighteenth century did not hold back from attacking women’s fashion either.

Luxury and extravagance were often used as the measure of immorality or of society’s downfall, and fashion was consequently seen as a vice in need of correcting. All manner of vices appeared in satirical prints, and women’s fashion choices were no exception.

The vast number of satirical prints about women’s fashions suggests that enough women were participating in what satirists considered excessive and ‘humorous’ for such prints to remain relevant.

Demonstrating interesting parallels between past and present, the satirical print Chloe’s Cushion or The Cork Rump bears a striking resemblance to the famous Paper magazine image in which Kim Kardashian ‘breaks the internet’. However, instead of a champagne glass perched on Kim’s ‘rump’, a tiny dog sits in its place on top of Chloe.

Matthew Darly, Chloe’s Cushion or The Cork Rump, 1777, col. engraving, British Museum J,5.129 (BM Satires 5429), Wikimedia Commons.

Ridiculing the fashion of wearing padding on the bum links to a similarly themed satirical print, The Bum Shop published by S. W. Fores in 1785. Four women are at various stages of the buying process, with some being fitted for the pads, whilst others admire the final look. 

However, all of the women are given ugly faces and look ridiculous to the viewer whilst wearing or holding the padding, indicating the mocking tone of the print. Extreme vanity is showcased here: the print describes the pads as a ‘fashionable article of female Invention’, suggesting that it was by women’s own choice that they dressed this way.

The Bum Shop, pub. S. W Fores, 1785, col. etching, British Museum 1932,0226.12 (BM Satires 6874), Wikimedia Commons.

Society, and especially men, disapproved of these extreme fashions. In some cases, they were angered by women trying to alter their bodies in ‘unnatural’ ways with padding that gave them a different shape and appearance – it was considered to be false, and not a true representation of their natural bodies. 

Most concerningly for men, if a woman could change her body shape she could potentially hide an illegitimate pregnancy.[3] A woman’s sexual reputation both before and after marriage was considered a matter of the utmost importance to elite gentlemen, as fears of illegitimate children inheriting their estates were prevalent in the period.[4]

Considering this social issue, it is understandable why fashion, female sexuality and a sense of female independence blended together in the male perception, and thereby became a key target of men’s concern.

Women controlled what they wore, making fashion unique as an area lacking domination by men; yet these prints indicate this did not stop men, and wider society, from trying to encourage changes.

The Bum-Bailiff Outwitted: or The Convenience of Fashion, pub. S. W. Fores, 1786, col. etching, British Museum 1851,0901.291 (BM Satires 7102), Wikimedia Commons.

The social scientist Mostafa Abedinifard puts forward the theory that ridicule (and therefore satire) can act as a key tool in society to threaten ‘any violations of established gender norms’.[5] This theory can help explain why satirical prints of women’s fashions were made. In Abedinifard’s words: ‘Through a mechanism involving fear of embarrassment, ridicule apparently occupies a universal role in policing and maintaining the gender order’.[6] By nurturing a fear amongst women of being ridiculed by the rest of society, the prints arguably created an environment of self-policing that encouraged women to stay within gender expectations.[7] These expectations guided women towards remaining natural in their appearance and staying within the private sphere, if they did not wish to be ridiculed.

There are many intriguing parallels between eighteenth century society and our own times in terms of how women’s self-expression through fashion is viewed. Using padding to alter the appearance of certain body parts is not a new phenomenon. In a post-lockdown world where we once again reassess how we clothe our bodies, it is interesting to consider the power fashion held in eighteenth century society, and the responses it generated.[8]

Holly Froggatt is an MA Historical Research graduate of the University of Sheffield. Her research seeks to explore the relationship between satirical prints and the ridicule of elite women, and the expectations they faced in eighteenth century Britain. You can find Holly on twitter @Holly_Froggatt 

Cover Image and Title: William Dent, Female Whimsicalities, pub. James Aitken, 1793, col. etching, British Museum 1902,0825.3 (BM Satires 8390), Wikimedia Commons.


[1] Cindy McCreery, The Satirical Gaze: Prints of Women in Late Eighteenth-Century England (Oxford, 2004), p. xii.

[2] Aileen Ribeiro, The Art of Dress: Fashion in England and France 1750-1820 (London, 1995), p. 72.

[3] Erin Mackie, Market à la mode: Fashion, Commodity, and Gender in the Tatler and the Spectator (Baltimore, 1997), p. 125.

[4] Roy Porter, English Society in the Eighteenth Century (London, 1991), p. 25.

[5] Mostafa Abedinifard, ‘Ridicule, Gender Hegemony, and the Disciplinary Function of Mainstream Gender Humour’, Social Semiotics 2 (2016), p. 241.

[6] Ibid., pp. 244-45.

[7] Mackie, Market à la mode, pp. 238-241.


Our Island Story


Why is the history of Britain so often thought to be an ‘island story’? There is, after all, nothing inevitable about islands ending up as unitary states. If Greenland is the largest thing we can call an island, then only one of the ten largest islands is a unitary state: Madagascar. In the top 50 there are only four more. The United Kingdom is not such a thing, of course: it consists of the island of Great Britain and part of the neighbouring island. In fact, the island of Great Britain has never been a self-contained and unitary state, and the United Kingdom has not so far lasted as long as the Kingdom of Wessex, and may not do so.

Histories of Britain, though, tend to tell the story of the development of the UK and the associated development of Britishness as if that were the natural destination of political development. People writing ‘British history’ write about all sorts of other things of course—medicine, the economy, sexuality, art, farming, buildings, witches—but what is considered the proper subject of a ‘History of Britain’ seems to be fixed on explaining the development of the Westminster state.

This conflation of a political community with a geographical object is a very common reflex, evident for example in the oft-repeated ambition to ‘put the great back into Great Britain’, rather than the more accurate but less catchy ‘put the great back into the United Kingdom’. (In fact, it’s a shame there has not been an equally prominent desire to ‘put the united back into United Kingdom’, but that is an issue for another day.)

The ‘history of Britain’ in this sense is closely connected to identity politics in all sorts of ways. In fact, in general, history and identity are intimately related: it is hard to say who we are without talking about our past and when we introduce ourselves we often give a potted history. That is also true of collectivities—there are, for example, landmarks in the past that all Watford football fans would refer to in explaining their connection with the club. More importantly, Brexit seems to have rapidly become a dispute about who we are and what we will be in the future; and equally rapidly that has become an argument about our past, its significant features, who it shows us to be and who is included in it.  

There is more to the island story than identity politics though: an interest in the origins of the UK and Britishness also reflects a more academic interest, of nineteenth-century origin. When history was professionalised, European nation states were the most powerful political organisations in human history, and the UK was the first and most powerful among them. The origins of the UK were of more than simply local interest, and that claim for the importance of British history remained common among historians into the 1980s. The world now looks very different though, and questions about the origins of the UK seem less important.

In my new book I try to suggest an alternative: that we write the political history of Britain as the story not of identity but of political agency. The long chronology—the last 6000 years—is set by a desire to learn from as broad a range of experience as possible, not by the depth of the roots of the institutions of the UK and British identity.  

The book explores the varying ways, over this very long period, people living on the island have used collective institutions to get things done: how that happened, who got to make things happen, and at what geographical scales they have acted.  It is a history of political life on, rather than in, Britain.    

Political power is exercised over our world but also over each other. Our collective institutions give us influence over our material and social world, but they also give particular people and groups power over others. At other times, though, collective institutions have protected us from what we might call the differential power of one group over others. Much of political history can be understood as the interplay of collective and differential power in the life of our collective institutions.

Writing this way also helps place British history in a broader geographical perspective. Collective institutions on the island have acted in response to material challenges and powerful ideas and what could be achieved at any particular time depended on what collective institutions were available through which to act. Those ideas and material challenges, and the institutional environment, have very rarely mapped neatly on to the island.  

My approach prompts us to think more carefully about the geographies of political life, adopting a less insular perspective. It explores how developments affecting large parts of the globe affected the island—the rise of empires, the growth of trade, innovations in political thinking—and how people living on the island responded at scales both larger and smaller than the island.  

The question here is not about identity, but who has agency at any given time, in relation to what and on what terms. This issue of agency, and the scales of effective political action, is of course a very pressing contemporary question.

In this sense, it is also a globalised history. The history of political life on the island is part of a history shared with a wider global region, and is often parallel to developments elsewhere.  It is a history of the globe as seen from this place—a history not of how Britain made the modern world, but of how the world made modern Britain.  

Memory is not just about identity: it is also a store of experience. By thinking about agency rather than identity, my book sets out to learn more broadly from the experience of the island’s previous inhabitants, giving us a much fuller perspective on where we are now, and more resources for thinking about what we might do next. 

Mike Braddick is professor of history at the University of Sheffield.  He has written extensively on the social, economic and political history of England, Britain and the British Atlantic.  He is currently working on a biography of Christopher Hill, the great Marxist historian, and on the politics of the English grain trade between 1315 and 1815.

Cover image: The White Cliffs of Dover, courtesy of Immanuel Giel, https://commons.wikimedia.org/wiki/File:Cliffs_of_Dover_01.JPG [accessed 6 June 2021]


50 Years of the Misuse of Drugs Act (1971)


On 27 May, it is exactly fifty years since the Misuse of Drugs Act 1971 (MDA), the UK’s primary legislation for controlling drugs, received Royal Assent.

The Act arranged drugs into a three-tier classification system – A, B and C – with controls based on the perceived relative harm of different substances. Now the legislation is at the centre of a campaign by Transform Drug Policy, which is calling for an overhaul of a law the organisation considers to have represented ‘50 years of failure’.

One of the rationales behind the MDA was to consolidate the existing patchwork of legislation that had developed in the UK since the Pharmacy Act of 1868. This was the first time Parliament recognised a risk to the public from ‘poisoning’ and the 1868 Act distinguished between substances that were ‘toxic’ (poisons) and substances that were both ‘toxic’ and ‘addictive’ (‘dangerous drugs’). 

Some of these so-called ‘drugs of addiction’ were later subject to further controls under the Dangerous Drugs Act 1920 (DDA) which introduced prescriptions and criminalised unauthorised possession of opium, morphine, heroin and cocaine. 

Whilst this did represent a continuation of wartime drug control efforts it was also the result of a racist media-led panic around Chinese opium dens, as well as being a response to international moves toward uniformity on drug regulation. 

The DDA was later clarified by the Departmental Committee on Morphine and Heroin Addiction in their 1926 ‘Rolleston Report’. This formed an interpretation of the Act that became known as the ‘British System’, framing ‘drug addiction’ as a medical issue rather than a moral failing. 

By the 1950s, drugs were becoming increasingly connected in public consciousness with youth subculture and – especially in the tabloid press – black communities and the London jazz scene, stoking further moral panic. 

By 1958, the British Medical Journal observed that the regulations around drugs and poisons were already ‘rather complicated’.[1] This picture was complicated yet further by the 1961 UN Single Convention on Narcotic Drugs, which laid out an international regime of drug control, ratified in the UK in 1964 by another Dangerous Drugs Act.

Another committee was also formed under the Chairmanship of Lord Brain, ultimately leading to (yet another) Dangerous Drugs Act in 1967 which held onto the principles of the ‘British System’ but introduced new stipulations, such as requiring doctors to apply for a licence from the Home Office for certain prescriptions. 

During the 1960s, drugs continued to be associated in the popular imagination with youth, with most attention by 1967 on the ‘Counterculture’ and ‘the hippies’, and in particular their use of cannabis and LSD. That same year, Redlands, Keith Richards’ country retreat, was raided by the drugs squad in a bust that was symbolic of a broader clash of ideologies.

The arrest and harsh sentencing of Jagger, Keith Richards and their friend Robert Fraser prompted William Rees-Mogg’s famous Times editorial ‘Who Breaks a Butterfly on a Wheel?’ on 1 July 1967. This became part of a wider public debate on drug use and on 16 July a ‘Legalise Pot’ rally took place in Hyde Park, followed on 24 July by a full-page advert (paid for by Paul McCartney) in the Times calling for cannabis law reform.

Imaginatively, the Government decided to convene another committee, this time under Baroness Wootton. Its report, published at the end of 1968, argued that whilst it did not think cannabis should be legalised, it should be made distinct in law from other illegal drugs. 

Finally in 1970, Home Secretary James Callaghan introduced a new Bill that was described during its passage through Parliament as an attempt to replace ‘…the present rigid and ramshackle collection of drug Acts by a single comprehensive measure’.[2] But the Bill was as ideological as it was pragmatic, and Callaghan himself had rejected the recommendations of Wootton.

The debates in both the Commons and the Lords indicate not only that most Members of Parliament who spoke on the subject had little understanding of the complexities of drug use, but also that the theme of the ‘permissive society’ and its supposed excesses was central.

The Bill was approved in May 1971, given Royal Assent the same month and fully implemented after two more years. The Act also established the Advisory Council on the Misuse of Drugs (ACMD), tasked with keeping the drug situation in the UK under review. 

Successive governments have tended to accept the recommendations of the Council, but there have been clashes, most notably the total breakdown of relations in 2009 when Professor David Nutt, then Chair of the Council, was sacked by Home Secretary Alan Johnson after claiming – with substantial evidence – that MDMA and LSD were less dangerous than alcohol.

For all of this, what has actually been the impact of the MDA? Well, as Simon Jenkins recently pointed out in a blog for the Guardian, 27,000 children and teenagers are now involved in ‘county lines’ drug gangs. Jenkins had previously described the MDA as a law that has done ‘less good and more harm’ than any other law on the statute book.

It is difficult to argue with this. Far from stemming recreational drug use, use of illegal drugs only increased after the MDA and became endemic in cities during the 1980s as heroin became a significant social issue. In 1979, the number of notified heroin users exceeded 1,000 for the first time. 

Over the 1980s and 1990s, drugs like MDMA were also increasingly used to enhance users’ experiences, especially in rave contexts, yet the Government line remained the same. As drug and harm reduction expert Julian Buchanan argued in 2000, ‘two decades of prevention, prohibition and punishment have had little noticeable impact upon the growing use of illegal drugs’.[3]

The MDA also deterred drug users from seeking help for fear of legal repercussions and limited the opportunities of countless young people. Last year, Adam Holland noted in the Harm Reduction Journal that in the UK, drug-related deaths were at the highest level on record and that although enormous time and money has gone into combating the illicit drugs trade, the market has not stopped growing.[4]

Writing thirty years after the MDA, Buchanan argued that a ‘bold and radical rethink of UK drug policy’ was needed. Such a rethink never materialised. In 2019, the House of Commons Select Committee on Drug Policy concluded that ‘UK drugs policy is failing’. Now, after half a century, it might be time for real radical change, and the anniversary presents a great opportunity for this conversation to gain momentum.

Hallam Roffey is a PhD Candidate in the Department of History at the University of Sheffield. His research looks at the idea of ‘acceptability’ in English culture between 1970 and 1990, examining changing attitudes around sexually explicit imagery, violent media, offensive speech and blasphemy. You can find Hallam on Twitter @HallamRoffey


[1] John Glaister and Edgar Rentoul, ‘The Control of the Sale of Poisons and Dangerous Drugs’, British Medical Journal 2 (1958), p. 1525.

[2] House of Lords debate (October 1969), Hansard volume 790, cols 189-90.

[3] Julian Buchanan and L. Young, ‘The War on Drugs—A War on Drug Users’, Drugs: Education, Prevention, Policy 7:4 (2000), pp. 409-22.

[4] Adam Holland, ‘An ethical analysis of UK drug policy as an example of a criminal justice approach to drugs: a commentary on the short film Putting UK Drug Policy into Focus’, Harm Reduction Journal 17:97 (2020).


‘Violent affections of the mind’: The Emotional Contours of Rabies


Living through the Covid-19 pandemic has more than drummed home the emotional dimensions of diseases. Grief, anger, sorrow, fear, and – sometimes – hope have been felt and expressed repeatedly over the last year, with discussions emerging on Covid-19’s impact on emotions and the effect of lockdown on mental health.

But emotions have long stuck to diseases. Rabies – sometimes called hydrophobia – is a prime example.[i] In nineteenth-century Britain, France, and the United States, rabies stoked anxieties. Before the gradual and contested acceptance of germ theory at the end of the nineteenth century, some doctors believed that rabies had emotional causes.

For much of the nineteenth century, the theory that rabies generated spontaneously jostled with the one that held that it was spread through a poison or virus. The spontaneous generation theory stressed the communality of human and canine emotions. Rather than contagion through biting, emotional sensitivity made both species susceptible to the disease.

A sensitive person prone to emotional disturbances was considered particularly at risk from external influences that might cause rabies to appear. “Violent affections of the mind, operating suddenly and powerfully on the nervous system” could in rare cases lead to rabies or, at the very least, exacerbate the symptoms in nervous patients, according to Manchester physician Samuel Argent Bardsley (who was more commonly known for promoting quarantine as a way of containing the disease).

For one Lancashire man, John Lindsay, the difficulty of feeding his family drove him to anxiety and despair, exacerbated by a bout of overwork and a lack of food. Fatigued, suffering from headaches, and fearing liquids, Lindsay remembered being bitten by a supposed mad dog some twelve years previously. Amidst violent spasms, visions of the black dog “haunted his imagination with perpetual terrors” and made recovery seem “hopeless.” With reluctance, Bardsley concluded that this was a case of spontaneous rabies. Emotional distress and an overactive imagination had caused and aggravated the disease.

During the mid-nineteenth century prominent London doctors argued that rabies was closely linked to hysteria and had emotional and imaginative origins, much to the chagrin of veterinarian William Youatt, the leading opponent of theories of spontaneous generation.[ii] In the 1870s alienists (otherwise known as psychiatrists) then lent greater intellectual credibility to theories of rabies’ emotional aetiology. They stressed the powerful sway that emotions and the mind held over individuals, especially in the enervating conditions of modern life.

Physician and prominent British authority on mental disorders Daniel Hack Tuke argued that disturbing emotions and images could create hydrophobic symptoms in susceptible individuals. Referencing Bardsley, and drawing on French examples, he argued that “such cases illustrate the remarkable influence exerted upon the body by what is popularly understood as the Imagination.” The very act of being bitten by a dog and the “fearful anticipation of the disease” was enough to spark rabies, even if the dog was not rabid. Even rational and emotionally-hardy doctors had reported suffering from hydrophobic symptoms when recalling the appalling scenes of distress during the examination and treatment of hydrophobic patients.[iii]

Tuke suggested that in some cases excitement or other forms of mental, emotional, and sensory overstimulation could activate the virus years after a bite from a rabid dog. He drew on a striking case from the United States, as reported by the Daily Telegraph in 1872. A farmer’s daughter had been bitten by a farm dog when choosing chickens for slaughter. The wound healed and no signs of rabies appeared until her wedding day two months later. The “mental excitement” of this life-changing event brought on a dread of water. After the ceremony she experienced spasms and “died in her husband’s arms.”

Tuke reproduced the newspaper’s view, and more generalized gendered assumptions about female emotional delicacy, that such “nervous excitement” had a profound influence on the “gentler” sex. In this case, her nerves were considered to have been exacerbated by the anticipation of the impending wedding night, which was often framed as an emotionally fraught sexual encounter.[iv]

Dr William Lauder Lindsay of the Murray Royal Asylum in Perth, Scotland, was another prominent proponent of the view that rabies was a predominately emotional disease. The disease, he argued, “is frequently, if not generally, the result of terror, ignorance, prejudice, or superstition, acting on a morbid imagination and a susceptible nervous temperament.” Under the sway of their overactive imagination, an individual could take on “canine proclivities,” such as barking and biting. In classist language, Lindsay argued that rabies showed the influence of mind over the body, especially in the “lower orders of the community.”[v]

The British alienists’ depiction of rabies as a predominately emotional disorder made its way across the Atlantic. In the mid-1870s Dr William A. Hammond, President of the New York Neurological Society and leading American authority on mental disorders, stated that the evidence from Europe suggested that heightened emotions might cause rabies in humans. More generally, New York physicians and neurologists debated whether or not individuals had died from actual rabies or fears of the disease, and discussed how fear might turn a bite from a healthy animal into death.[vi]

The alienists lent greater credibility to earlier theories that rabies anxieties could lead to imaginary or spurious rabies. Tuke asserted that fears of rabies could create an imaginary manifestation of the disease. “Hydrophobia-phobia” demonstrated clearly the “action of mind upon mind,” and was distinct from the “action of the mind upon the body” in those cases when emotional distress led to actual rabies.

Echoing Tuke, Lindsay identified women as a particular vector in triggering spurious rabies. He asserted that they spread rabies fears, as supposedly shown by an Irishwoman in Perth who had frightened her husband into believing he had rabies. For Lindsay, this was a classic case of spurious (or false) rabies, which required the rational and firm intervention of medical men, such as himself, to stamp out. But he felt himself fighting an unstoppable tide. For in America, as well as Britain, the press ignited fears and created spurious rabies in susceptible individuals.[vii]

Lindsay and Tuke believed that rabies could, in some cases, be transmitted by dogs to humans through biting and “morbid saliva.” But some doctors controversially argued that it was a purely emotional disease. Eminent Parisian doctor Édouard-François-Marie Bosquillon set the tone in 1802 when he confidently declared that rabies in humans was caused solely by terror. His observation that individuals were struck with hydrophobic symptoms, including “loss of reason” and “convulsive movements,” at the sight of a mad dog provided sufficient proof.

Horror-inducing tales of rabies, fed to children from a young age, created fertile conditions for the development of the disease, particularly in “credulous, timid and melancholic” people. Gaspard Girard, Robert White, William Dick, and J.-G.-A. Faugére-Dubourg developed this line of argument as the century progressed. And the theory had traction. In the 1890s, Philadelphian neurologist Charles K. Mills insisted that rabies was purely a disease of the nerves. Such theories were, however, contentious, and Tuke cautioned against those who asserted that rabies was solely an imaginary disease.[viii]

Nonetheless, these theories cemented rabies as an emotionally-fraught disease and reinforced the dangers of dog bites: even a bite from a healthy dog could trigger a lethal neurological reaction in the swelling ranks of anxious individuals.

Dr Chris Pearson is Senior Lecturer in Twentieth Century History at the University of Liverpool. His next book Dogopolis: How Dogs and Humans made Modern London, New York, and Paris is forthcoming (2021) with University of Chicago Press. He runs the Sniffing the Past blog and you can download a free Android and Apple smart phone app on the history of dogs in London, New York, and Paris. You can find Chris on Twitter @SniffThePastDog.


Cover image: ‘Twenty four maladies and their remedies’. Coloured line block by F. Laguillermie and Rainaud, ca. 1880. Courtesy of the Wellcome Collection, https://wellcomecollection.org/works/pysjar4f/images?id=mpqquvrh [accessed 25 March 2021].

[i] Contemporaries sometimes used “rabies” and “hydrophobia” interchangeably to refer to the disease in animals and dogs, but sometimes used “rabies” to refer to the disease in dogs and “hydrophobia” for humans. With the rise of germ theory at the end of the nineteenth century, “rabies” gradually replaced “hydrophobia.” For simplicity’s sake, I will use “rabies” to refer to the disease in humans and animals unless I quote directly from a historical source.

[ii] Samuel Argent Bardsley, Medical Reports of Cases and Experiments with Observations Chiefly Derived from Hospital Practice: To which are Added an Enquiry into the Origin of Canine Madness and Thoughts on a Plan for its Extirpation from the British Isles (London: R Bickerstaff, 1807), 238-50, 284, 290; “Hydrophobia”, The Sixpenny Magazine, February 1866; Neil Pemberton and Michael Worboys, Rabies in Britain: Dogs, Disease and Culture, 1830-2000 (Basingstoke: Palgrave Macmillan, 2013 [2007]), 61-3.

[iii] Daniel Hack Tuke, Illustrations of the Influence of the Mind Upon the Body in Health and Disease Designed to Elucidate the Action of the Imagination (Philadelphia: Henry C. Lea, 1873), 198-99, 207.

[iv] Tuke, Illustrations,200-1; Daily Telegraph, 11 April 1872; Peter Cryle, “‘A Terrible Ordeal from Every Point of View’: (Not) Managing Female Sexuality on the Wedding Night,” Journal of the History of Sexuality 18, no. 1 (2009): 44-64.

[v] William Lauder Lindsay, Mind in the Lower Animals in Health and Disease, vol. 2 (London: Kegan Paul, 1879), 17; William Lauder Lindsay, “Madness in Animals,” Journal of Mental Science 17:78 (1871), 185; William Lauder Lindsay, “Spurious Hydrophobia in Man,” Journal of Mental Science 23: 104 (January 1878), 551-3; Pemberton and Worboys, Rabies, 96-7; Liz Gray, “Body, Mind and Madness: Pain in Animals in the Nineteenth-Century Comparative Psychology,” in Pain and Emotion in Modern History, ed. Rob Boddice (Basingstoke: Palgrave, 2014), 148-63.

[vi] “Hydrophobia: The Subject Discussed by Medical Men,” New York Times, 7 July 1874; Jessica Wang, Mad Dogs and Other New Yorkers: Rabies, Medicine, and Society in an American Metropolis, 1840-1920. (Baltimore: Johns Hopkins University Press, 2019), 150-1.

[vii] Tuke, Illustrations, 198-99; Lindsay, “Spurious Hydrophobia in Man,” 555-6, 558.

[viii] Lindsay, Mind in the Lower Animals, 176; Édouard-François-Marie Bosquillon, Mémoire sur les causes de l’hydrophobie, vulgairement connue sous le nom de rage, et sur les moyens d’anéantir cette maladie (Paris: Gabon, 1802), 2, 22, 26; Vincent di Marco, The Bearer of Crazed and Venomous Fangs: Popular Myths and Delusions Regarding the Bite of the Mad Dog (Bloomington: iUniverse, 2014), 141-47; Pemberton and Worboys, Rabies, 64; Tuke, Illustrations, 198-99; Wang, Mad Dogs, 151-2.


‘Always protest’? Drag Race, Pathé Newsreels, and Subversion in Mainstream Media


RuPaul’s Drag Race sells itself, and has been praised, as a subversive television series. RuPaul, eponymous creator of the drag contest gameshow, has stated ‘true drag will never be mainstream. Because true drag has to do with seeing that this world is an illusion’. British judge Graham Norton recently claimed ‘there’s something dangerous about drag still’. Echoing this, a contestant queen from the British spin-off of Drag Race enthused that ‘Drag was always a protest, a political statement’. Drag Race, participants and producers alike insist, is inherently subversive because drag necessarily challenges the gender norms of ‘straight’ society.

Drag Race has also become a mass media phenomenon. A niche show in 2009, its 13th series premiered this year to 1.3 million viewers. Interviewed, like any self-respecting A-list celebrity, by the Muppets, and boasting both a Simpsons cameo and a star on the Hollywood Walk of Fame, RuPaul is arguably the most famous drag queen in the world. This raises the question: can drag retain a subversive edge in mainstream media?

To consider this, it is instructive to look at one of drag’s first brushes with mass media in Britain. It was during the interwar period that drag first appeared onscreen, chiefly through cinema newsreels. Newsreels – short non-fiction topical films summarising the week’s current events – were included in almost every cinema programme until the 1960s. To leaven the news, they frequently featured variety entertainment; offshoot newsreels such as Pathetone even consisted entirely of filmed music hall acts.

A well-established form of music hall repertory from the nineteenth century, drag soon found its way into the newsreel. Bert Errol amazed cinemagoers by changing into high drag before their eyes in 1922. West-End comedian Douglas Byng appeared in rudimentary drag singing innuendo-laden falsetto across the 1930s. A 1937 item covered a police pantomime, with multiple shots of officers putting on makeup and dresses. In 1939, six sailors dressed as fairies sang and pranced before King-Emperor George VI during a naval inspection.

This seems remarkable at a time when the populist paper John Bull ran editorials attacking London’s queer men for transvestism, castigating them as the ‘painted boy menace’.[1] From the mid-1920s, for a man to wear women’s clothes and makeup became tantamount to being queer.[2] In the 1930s, it is estimated 40 percent of Britons went to the cinema once a week and 25 percent twice or more.[3] To make drag palatable for the mainstream, newsreels had to ensure conventional manliness remained unchallenged and any association with queerness was muted.

As such, newsreels usually placed drag in establishment settings. Byng was a fixture of London’s fashionable set, always filmed in high-end venues like the Paradise Club, laughing with elites more so than at them. Likewise, Errol’s wife helped him change into drag, making sure audiences knew he was a red-blooded heterosexual, wig and high heels notwithstanding. The police officers and sailors returned to their uniforms, drag but a brief interlude (the naval fairies lasted but twenty seconds onscreen) from their ‘manly’ public service. Ensconced in marriage, elite society, and ‘masculine’ professions, queens could not truly send up the establishment when they were often performing from the heart of it.

Moreover, newsreels always framed drag as comedy. Ian Green has argued comedy allows latitude for contentious topics. Yet, because comedy resolves in laughter, it curtails earnest critique.[4] David Sutton likewise concludes comedy as a genre is ‘the appropriate site for the inappropriate, the proper place for indecorum’.[5] Comedy is establishment-condoned critique, safely dissipated in laughter. All the above acts, awash with puns and gags, aimed to make cinemagoers laugh, not challenge their gendered assumptions. Far from a challenge to the status quo, then, interwar drag acts could only enter mainstream media as safe entertainment bereft of queer connotations.

This is not to say drag culture could not be subversive. For queer men to wear women’s clothes and attend drag balls was certainly a brave and subversive act in the interwar period, one that provoked the British establishment.[6] The interwar life of Quentin Crisp is representative of the defiant subversion that came from wearing cosmetics.

Yet, as Jacob Bloomfield has shown, drag onstage was not inherently controversial and remained a staple of popular theatre.[7] Similarly, filmed drag acts obviated controversy in order to appeal to the broadest possible audience. In fact, looking at newsreel drag items reveals a legacy of conservatism for drag acts in the mainstream.

The producers of Drag Race would like to make their show the heir to the counterculture of drag balls and gay bars. Yet, in many respects, it is the mainstream heir to newsreel variety acts. Like newsreels, Drag Race is foremost comic entertainment, more inclined to jokes than politics. What little gender discussion there is occurs in the fleeting moments between farcical gameshow skits. The only challenges presented are to the competing queens’ dignities.

Like Pathé’s producers, RuPaul has espoused a profoundly conservative view of ‘true’ drag. Through transphobic comments, he has stressed drag as the exclusive province of gay men. Thus, much as newsreels removed any ‘controversial’ association with queerness, so Drag Race has placed strict limits on what drag represents and who can perform it.

A look at the history of drag in newsreels reveals that to project drag through mass media is not inherently subversive. Whether in Pathé or on BBC3, being produced as mainstream entertainment severely curtails any potential for real subversion of societal norms such as gender. Former drag performer Paul O’Grady, carping in 2017 about Drag Race, contended that his drag persona Lily Savage ‘belonged in a pub, especially a gay bar, where you could rant and rave’. Considering drag’s relationship with popular media, perhaps it is only in niche subcultures that subversion can truly flourish.

Conner Scott is a PhD student in the Department of History at the University of Sheffield. His research seeks to explore the role of British newsreels in everyday life, and how they (re)presented the cinemagoing public to itself on a weekly basis between c.1919-c.1939.


Cover image: Manchester Pride Parade 2019. A group of five drag queens representing BBC’s ‘RuPaul’s Drag Race UK’ on pink stage, Manchester, 24 August 2019. Used courtesy of Goncalo Telo for non-commercial, educational purposes. https://www.shutterstock.com/image-photo/manchester-uk-august-24-2019-pride-1489347011

[1] Matt Houlbrook, ‘“The man with the powder puff” in Interwar London’, The Historical Journal 50.1 (2007), pp. 147-49.

[2] I use the term queer as it was the most common self-identity of interwar men who had sexual and emotional relationships with other men and avoids the anachronism of gay. See Matt Houlbrook, Queer London: Perils and Pleasures in the Sexual Metropolis, 1918-1957 (London, 2005), p. xiii.

[3] Annette Kuhn, An Everyday Magic: Cinema and Cultural Memory (London, 2002), p. 2.

[4] Ian Green, ‘Ealing: In the Comedy Frame’ in James Curran and Vincent Porter (eds), British Cinema History (London, 1983), p. 296.

[5] David Sutton, A Chorus of Raspberries: British Film Comedy 1929-1939 (Exeter, 2000), p. 60.

[6] See Matt Houlbrook, ‘Lady Austin’s Camp Boys: Constituting the Queer Subject in 1930s London’, Gender and History 14.1 (2002), pp. 31-61; Houlbrook, Queer London.

[7] See Jacob Bloomfield, ‘Splinters: Cross-Dressing Ex-Servicemen on the Interwar Stage’, Twentieth Century British History 30.1 (2019), pp. 1-28.
