Modern History

European history and ‘eurocentrism’ – a conversation between Dina Gusejnova (LSE) and Charles West (Sheffield)


Over the past few weeks, Dina Gusejnova and Charles West have been discussing over email what ‘Eurocentrism’ means for historians studying and teaching European history. What follows is an edited version of their conversation.

There are lots of conversations going on at the moment about ‘eurocentrism’, and how it relates to the study and teaching of European history, both in schools and in universities (see for instance the project Why Europe, Which Europe?). I take eurocentrism to mean the conceptual privileging of Europe over the rest of the world, as if this part of the world’s history were intrinsically more important than anywhere else’s and could serve as a universal benchmark for measuring progress.

But it’s increasingly common to hear people saying ‘eurocentric’ when they just mean ‘focusing on the geographical region of Europe’. This seems to me to be an unhelpful slippage. Studying the history of the geographical region of Europe is not in itself eurocentric. It depends how you do it. This might be a point that hardly needs to be made, but still: there are lots of good, non-eurocentric reasons to study, and ways of studying, European history.

And that’s all the more self-evident for people living in Europe. Place matters in history writing, because the world necessarily looks different depending on where one looks at it from: none of us has a God’s eye view of the world, and we need to remember our own positionality. So I see it as neither surprising nor intrinsically problematic if European history features more strongly in European countries’ school and university curricula than elsewhere, for example in India or Kenya or the United States (though of course other geographies and scales of history are and must be studied and taught in Europe too, for all kinds of important reasons: indeed I’ve contributed to this in a small way myself). I wondered what you thought about this as an historian of modern Europe?

First I’d like to say it’s really great to be able to return to a subject which, I recall, we last talked about on a work trip to India. The purpose of that trip was to foster institutional connections with universities in India, which made it a fitting context for the discussion.

Colloquially, overcoming ‘eurocentrism’ often means reducing the study of European history as such. The use of the term is often linked to demands for decolonisation, but in some sense, all historians need to think self-critically about their practice, whether they are historians of Europe or any other part of the world.

If the critique of eurocentrism has any constructive meaning for historical research and teaching, it is as a critique of a certain petrified view of modernity. As Dipesh Chakrabarty put it in Provincializing Europe in 2000, eurocentrism is a philosophy of history which ‘goes to the heart of the question of political modernity in non-Western societies’ by imposing a ‘“first in Europe, then elsewhere”’ mode of thinking about historical time. This mode of thinking was pernicious not only because it placed Europe at the centre, but also because it affected approaches to the non-western world, where those under its spell were prone to ‘replacing “Europe” by some locally constructed center.’ Histories of modern political economy have always been linked to the study of the moral sciences, looking at issues such as agency, complicity, and the eternal question of who benefits. Depending on the epoch of study, ‘European hegemony’ emerged through the expansion of a limited set of European powers and transnational actors, including the Catholic Church, such corporations as the Jesuits, and business ventures like the East India Company or the Hanseatic League. It could also be seen as a product of the globalisation of the trade in people and commodities, the exploitation of labour and many other dimensions of the story. It is worth pointing out that Europeans were neither the sole beneficiaries of imperialism, colonialism, and associated forms of hegemony, nor were they always the main agents of this process. Like other populations in the world, Europeans today are in some respects products of these processes. In fact, according to Marx, the agent of modernisation is capitalism itself – an abstract force with destructive power.
Whichever story about modernity one takes on board, what matters for historians of modernity is that the very idea of Europe as a subject has been one of the by-products of modernisation, and this explains how the charge of ‘eurocentrism’ has been deployed rhetorically in the past.

At the time of Chakrabarty’s book, around the year 2000, many American historians were beginning to think of themselves as the ‘last Eurocentric generation’. Others who were working on areas outside Europe at the time still felt ‘stranded at the discipline’s periphery’. At one level, a lot has changed in the intervening twenty years when it comes to the geographical ‘provincialisation’ of Europe. The study of non-European history has been valorised more than before and uncoupled from European centres (though perhaps not enough) at European and American history departments, journals and institutions. In the early 2000s as an undergraduate in Cambridge, I witnessed this even in the physical transformation of intellectual spaces around me, as I cycled past a building where the words ‘Oriental Studies’ were quietly erased and replaced with ‘Faculty of Asian and Middle Eastern Studies’ – though of course the very assemblage of such large geographical areas in one subject group still bears the structural traces of empire. Faculties such as History and Classics may not have been visibly affected to the same degree, but in these disciplines, too, the changes of the past twenty years have been profound. Many grands récits of modern history now have a tone of greater humility when it comes to Europe’s place in the world. Specialists in non-European history such as Kenneth Pomeranz and Jürgen Osterhammel, who both established their intellectual authority by working on China rather than Europe, have argued influentially that the ‘great divergence’ between Europe and other parts of the world in terms of wealth and growth was due to fortuitous factors such as the geographical distribution of coal; they used comparative and partly counterfactual arguments which have opened up a range of new approaches which contextualise Europe within modernisation processes in global history.
In 2004, Chris Bayly, the preeminent historian of the British empire, famously announced that ‘all local, national or regional histories must, in important ways, therefore, be global histories. It is no longer really possible to write “European” or “American” history in a narrow sense’. One historian has recently described ‘area studies’, which was one domain of research in which non-European histories emerged in the twentieth century, as a European-dominated ‘struggle for world knowledge’ (in a nod to Fritz Fischer’s critique of Germany in WWI, “The Struggle for World Supremacy”). In short, in the discipline of history, recent historiography has been correcting the representation of Europe in its domain in ways similar to the revisions which geographers have introduced to the Mercator map: Europe has been recontextualised, adjusted proportionally to its significance, moved out of focus, indeed, provincialized, if you like.

I find it remarkable that the critique of ‘eurocentrism’ is not only still prominent in meta-historiographical discourse twenty years after its latest emergence in history, but that the critical meaning of the term has narrowed. It now usually implies the idea of valuing European history over that of other geographical areas or populations, not, as in Chakrabarty’s interpretation, teleological thinking as such. To name only one example, in a 2020 opinion piece for the New York Times, the historian and theorist of postcolonial political thought Adom Getachew insisted that ‘a Eurocentrism that valorized European civilization as the apex of human achievement’ has been a mainstay of academic culture, adding that elements of nineteenth-century imperialism continue to resonate in anti-immigrant politics in the EU.

This kind of critique of eurocentrism points in three directions: first, it is directed against academic and public history which has marginalised or exoticised research on geographical areas outside Europe; secondly, it is a critique of historical pedagogy and practice, which affects the ideas and self-valorisation of a much larger circle of people; finally, it is an institutional critique of inequalities in the modern world.

Generally, I think it is a welcome phenomenon that we Europeanists have to think more self-critically about our subject. But I also think that the concept of eurocentrism, if it is to be productive in critical pedagogy and research, cannot be projected outward on an imaginary discrete ‘eurocentric’ other. There is still a lot of work to be done within the historiography of Europe itself in unpacking the emergence of different ideas of Europeanness, and there is also a need for greater contextual and comparative work on a global as well as a local scale. The continent has its own Mercator-like distortions, which make some nations appear larger or in sharper focus than others. This is also true of urban versus rural histories, and so on.

Let me give some more practical examples of the way I use critiques of eurocentrism in my teaching, in my undergraduate course on interwar cultural history (Interwar worlds: the cultural consequences of the First World War). In contrast to the political history of the First World War, which has become more global in orientation, the historiography of interwar culture remains profoundly tied to a few familiar themes in European or North American history, such as British anti-war poetry, Weimar or Soviet culture, or the jazz age in France and the US. Between 1919 and 1935, thinkers such as Oswald Spengler and Edmund Husserl acknowledged that the war had forced Europeans to think of their civilization as finite (Spengler), and admitted that the classic teleology of philosophy itself, culminating in the creation of a ‘European humanity’ at its summit, was in profound crisis in the interwar period (Husserl). Yet in the work of historians, for a long time, the master narratives of such cultural transformations remained – well, not only Eurocentric, but centred around the classic European empire-nations, France, Germany, or Russia (the Soviet Union). This is nowhere more palpable than in studies of war memory, where figures like the doyen of French national historiography, Pierre Nora, loom large.

When it comes to designing research areas for students, one response to the charge of ‘eurocentrism’ might call upon historians to dismiss eurocentric studies such as Pierre Nora’s influential conception of ‘lieux de mémoire’ altogether – as the Algerian postcolonial thinker Seloua Luste Boulbina has done in an open letter to Pierre Nora. An alternative option is to reinterpret the whole idea of universalism that is inherent in French national and European history at large, and to examine it as a mode of claiming power that is available to different groups in history. This is something the Senegalese philosopher Souleyman Bachir Diagne has described as ‘horizontal’ or ‘lateral universalism’, a work in progress, with its eurocentric sting taken out. In that interwar cultural history course, I am closer to Diagne’s view of things. For instance, I tend to encourage students to use this Francophone literature to explore the memorialisation of the war in contexts such as the British Mandates in Africa and the Middle East, emphasising the circulation of memorial designs between different regime types and their different uptake in society; or they can engage with established studies of Soviet or Weimar culture and concepts such as ‘cultural revolution’ (which was itself originally taken from its Chinese context to examine Soviet history) to look at new vernacular movements or modern media in interwar Turkey, China, or Japan. In other words, I don’t think it is productive to start entirely from scratch or ‘write out’ the specific biases that have come to exist. Each research question requires thinking on one’s own feet and reinventing one’s methodological toolkit. The course is intentionally designed around an open question about the war’s cultural consequences, and any historiography is examined in a critical light and used in a modular fashion to expand our horizons.


I agree that the charge of ‘eurocentrism’ might have special implications for historians of modern Europe, as you suggest, since the concept is so tied to that of modernity. But I also think it’s something that medieval specialists need to think about too. And of course they have, for instance by expanding their geographical horizons to think about the wider world, and by collaborating with experts on other parts of that world, including scholars institutionally located in the contemporary global south. And the debates continue as to whether ‘medieval’ is a category which only applies to European history, or whether it can and should be applied elsewhere too (both options can be labelled as ‘eurocentric’, after all). These developments are positive and valuable. Yet as I suggested at the beginning, I would argue that there’s a lot that can be done (and has been done) to tackle eurocentrism within the study of ‘medieval’ Europe as well.

The teaching dimension you highlight is really important: after all, this is how we communicate the priorities and shape of the discipline we work in to the next generation. Eurocentrism is something I’ve thought about (and discussed with students) in the context of my third-year special subject course. The course is focused on 9th-c. Francia, that is, on lands now divided between several modern European countries (France, Germany, Belgium, Luxembourg, the Netherlands, Austria, Italy, Spain). In a geographical sense it’s obviously a European history course. And there was a concept of Europe in the Middle Ages too, as Klaus Oschema has recently emphasised. The Franks had an idea of Europa. However, with a few interesting exceptions, they chose not to use it as a frame of reference or context; it wasn’t very important to them. Still, its very existence is significant in all kinds of ways, not least in that it means we shouldn’t blindly impose our concept of Europe upon a period which had its own. That’s an important nuance. We need to avoid seeing the Carolingian empire as a proto-EU, as Marie-Celine Isaïa has pointed out, as if the idea of Europe were unchanging or timeless, but nor should we imagine this idea just popped into existence in the 19th century. Neither is accurate. Historicising the concept of Europe, showing how the concept has meant different things at different times, is a key step in battling eurocentrism, and this is something which medieval history courses can contribute towards.

At the same time, the history of Europe doesn’t have to be only a history of the concept. It’s legitimate also to think about the history of the geographical region we now think of as Europe. In particular I’d suggest it’s important to include perspectives from al-Andalus, and to ensure these are included in the study of Europe as a region. I’ve worked them into my teaching for this very reason – not simply for the sake of it, but because this is relevant history. Al-Andalus is often tacitly sidelined by earlier medieval historians who aren’t specialists, not least for linguistic reasons, since its textual records are mostly in Arabic rather than Latin. Moreover, it was more culturally integrated with the Islamicate world than with the Christian lands to the north, and Amira Bennison has shown that Andalusians didn’t usually think of themselves as ‘European’ (though contemporaries to the north occasionally did). But then, as I’ve just said, the same is true of the Franks. This emphatically doesn’t mean al-Andalus isn’t part of European history, and there’s a responsibility to make that clear. If we don’t point things like this out, which we can only do by teaching European history, then old framings will be left untouched and unchallenged. And it’s interesting and important, to come to your point about not quietly removing but critically interrogating the European historiographical legacy, for students to consider why al-Andalus, and for that matter Muslim Sicily and even Christian Byzantium, have often been tacitly excluded from histories of Europe – and how putting them back in changes the picture.


The topic of tacit exclusion highlights an important aspect of the problem we are discussing, namely, that the slogan of ‘provincializing’ Europe and terms such as ‘eurocentrism’ have been used out of context as tools for choosing what to study (or rather, what not to study any longer). This selective appropriation of terminology obscured the fact that these critiques were mostly focused on the question of how one studies phenomena, how things are contextualised and narrated. The real problem which limits historical research is the adherence to any kind of ‘centrism’ or ‘teleology’, which is often the consequence of an intuitive attempt to relate all unfamiliar phenomena to certain familiar types of historical event, or of an assumption that by covering the history of, say, urban environments, one has already subsumed the rural, and so on. Such systemic oversights, sometimes modelled around potted national histories of different European states, can be as damaging to a historian as they might be to a political campaigner who never leaves the confines of her home district. It is tempting to stick to path-dependent accounts of special national histories, particularly of European states, such as the history of National Socialism and the rise of the Third Reich, or Soviet history, where ‘centrism’ has led historians to explain the history of the Third Reich only by looking at Germany (thus missing, for instance, the fact that more than five of the six million Jews killed in the Holocaust came from Nazi-occupied Europe outside Germany), or to look at the Russian Revolution and Civil War without paying attention to the history of, say, the fortunes of social or liberal democracy in the Muslim Caucasus.

I doubt that manifestos in themselves actually have a decisive impact on historical research. They may be nothing more than flags which blow according to the winds of change caused by other factors around them. For instance, to pick a few examples from a range of subjects in modern European history, one of the most illuminating accounts of the Russian revolution from the point of view of Russia’s non-Slavic peoples was published in 1972; it is Ronald Suny’s study of the Baku Commune. Hannah Arendt’s Origins of Totalitarianism, which emphasises the common genealogies of modern extermination policies in Nazi Germany and Stalin’s Russia in colonial violence, dates back to 1951. Aby Warburg’s art historical studies of the ‘afterlife of antiquity’ and his provocative Memory Atlas provoked the European establishment of the 1920s by giving equal weighting to ancient and contemporary expressions of feeling, and to European and non-European cultures – a profound challenge to many disciplines. Simon Dubnow’s Jewish histories, produced in the years between the end of the Russian empire and the Molotov-Ribbentrop pact dividing eastern Europe between Germany and the Soviet Union, could be seen as examples of early subaltern studies. All this was produced years before the term ‘eurocentrism’ and its cognates were coined. Dubnow’s work could be seen as a way of valorising the history of a marginal people inhabiting the so-called Pale of Settlement, the underdeveloped and impoverished western hinterland to which the European Jews were confined under the Russian empire and where many Jews still lived until the extermination of Eastern European Jewry in the Holocaust – including Dubnow himself, who was killed by a Gestapo agent in 1941. Looking at this aspect of European history, the term ‘hegemony’ hardly applies.
In short, overcoming ‘centrism’ might look different for histories of modern racism and for histories of modern nationalism and dictatorship, and it is certainly not something for which any one generation of historians can pat itself on the back as uniquely historically engaged. Others in the past have taken far greater existential risks in doing this kind of work.


I think this ‘centrism’ of various kinds is a problem in medieval history too. There’s a rich tradition of social, and Marxist, historiography, but I wonder how many medieval European survey courses explicitly make space for the peasantry, who formed the overwhelming majority of the population, and whose production structured the economy and thus funded all the more glamorous things? Of course there is the added problem that it’s the centres which are not only best documented but also most accessible to students through translation of the narrative sources, which tend to privilege rulers and elites (with the great exception of Icelandic saga evidence). Thankfully the problem of ‘special national histories’ is less acute for the early medievalist these days, though nationalists do often turn to the distant past for the most deeply-rooted authentication of their projects (and I fear we may see more of this in an English context).

But these sorts of problems aren’t pedagogically insuperable, and historians of Europe have some experience in engaging with them. I’d suggest that an anti-eurocentric European history pedagogy might involve explicitly demonstrating how European history can be read as undermining the triumphalist 19th- and 20th-century teleological narratives of European modernity which positioned that history as the universal benchmark, for instance by underscoring the relatively peripheral nature of the western Eurasian peninsula for most of recorded history. It might involve showing that Europe has always been entangled with the wider world, not only when it seemed dominant but also before and after; and crucially, that it has been fundamentally shaped through that entanglement (here I think of Saba Mahmood’s brilliant critique of Charles Taylor’s account of secularism, highlighting its assumption of an ideologically hermetically sealed Europe). Those entanglements need to be understood as forming part of European history too. Take for instance monasticism, a key social movement and intellectual matrix in Europe, which was initially appropriated from an Egyptian set of ascetic practices, and whose western variant remained strongly influenced by eastern Mediterranean culture. An anti-eurocentric European history needs to have very porous boundaries, because flows over those boundaries have often been of fundamental importance. As I’ve already suggested, it might further involve demonstrating that the idea of Europe was itself historically produced, has never been simply just ‘there’, and has changed its connotations over time. As has often been pointed out, for instance, Europe was not a place or a context that mattered much to the ancient Romans.
It might involve revealing and underscoring the human diversities that in different ways have always characterised Europe’s history, giving the lie to any idea of a homogenous ‘white’ European past, whilst doing justice to the processes of exclusion which have been directed, often cruelly, at these diversities, as set out in R.I. Moore’s concept of the persecuting society, in which the centre defined itself through and against the margins. It should be stressed that Europe has always been a culturally plural region, though not always peacefully so, as you mentioned earlier. And it might involve comparative methods, partly to highlight how European history has never embodied the whole world’s history.

In all these aspects, I believe historians looking at the Middle Ages have a significant contribution to make. After all, Kathleen Davis has shown how representations of the Middle Ages were central to European conceptions of (and thus interventions in) the wider world in the nineteenth century. Rethinking the European Middle Ages critically thus destabilises eurocentric analyses from within, so to speak. Approached this way, the study of European history can perhaps not only escape the trap of eurocentrism, but contribute significantly to springing it. Eurocentrism is fundamentally a problem of historical method, not of content. But historians of Europe, including those looking at its more remote past, may nevertheless have a necessary role to play in dismantling it at source. Ignoring or downplaying European history, especially in European pedagogical contexts, might be done for the right reasons, but, in leaving older narratives intact, have all the wrong results.


So far we have spoken about ‘eurocentrism’ as a timeless concept for interrogating pedagogies, particularly those related to courses in modern and medieval European history. But I think it is worth exploring in historical perspective how and when the critique of eurocentrism itself has emerged. The term ‘eurocentrism’ dates back to a specific moment in European and global history and in some ways remains restricted by it. It actually came into circulation in France, coined by the Egyptian-French political economist Samir Amin in 1988. Related critical terminology emerged around the 1970s and 1980s in the United States and in Britain, coined by Edward Said at Columbia, Ranajit Guha and the Subaltern Studies group at Sussex, Teodor Shanin and his studies of the sociology of the global peasantry at Sheffield, and others. One could also throw books such as Martin Bernal’s Black Athena (1987) into this mix. These critiques of European hegemony emerged at a time of globalisation and also at the height of the Cold War, and reflect the circumstances of this dual moment. What the above-mentioned authors had in common was their critical reflection on the structures of the bipolar world order and its critical shadow, the non-aligned movement, reflected in terminology such as the ‘Third World’, coined in the early 1950s by another French intellectual, Alfred Sauvy. Their use of the term ‘eurocentric’ entailed a critique of capitalism which, however, remained distant from orthodox or Soviet Marxism. They were influenced by critical readings of Hegel, and of Gramsci’s Prison Notebooks, which, though written in the mid-1920s, were only translated into English in the 1970s, and which focused on the concept of ‘hegemony’ and the question of subaltern subjectivity. They were also observing contemporary events such as the peasant-driven revolutions of the non-western world. Even though these authors used terms such as ‘development’ and ‘Third World’, they also felt trapped by them. 
Teleologies came under attack not only from scholars like Amin and Said, who sought to draw attention to the significance of non-European and especially non-Christian civilizations, but also from Shanin, who engaged critically with more Eurocentric Marxist conceptions of revolution such as those offered by Eric Hobsbawm.

Forty years from the minting of ‘eurocentrism’, it is time to re-evaluate the circulation of this coin in the context of modern political economies. We have discussed how ‘eurocentrism’ is used as a critique of an ideology which justifies Europe’s dominant position within the global capitalist world system, and how as such it is coupled with a demand to reduce the proportion of attention given to the history of Europe in the teaching of modern history. But even more problematically, the critique of ‘eurocentrism’ often goes hand in hand with the demand to reduce the history of the pre-modern world altogether. I think both dogmatic interpretations of the critique can be very damaging to the discipline.

As a modernist, I am very aware that it is historians of pre-modern periods who have pioneered a variety of methods designed to help bring to life the past lives of those not recorded in institutionalised or written histories, including intellectual history sensitive to marginal voices, oral history, discourse analysis, and studies of material culture; they have examined plural ideas of development, contradictory beliefs, forgotten ideologies. There has been an expansion of valid sources for historians together with an expansion of methods to pursue questions about the past (hence the various ‘turns’ since the 1970s). Knowing how to draw on records such as inquisition protocols to tell the history of the inquisition’s victims comes in handy when dealing with modern histories of oppression and persecution, whether by the KGB, the Stasi, or Pinochet’s regime. It would be foolish, say, to diminish the place of French historiography in a zealous attempt to remove European components from the curriculum. In using ‘eurocentrism’ too fanatically as a tool, one risks throwing out many methodological riches which have accumulated in this domain. The same goes for other peculiarities of European historiographies.

The 20th-century critiques of eurocentrism were not absolutely new, but rather updated versions of a range of critical positions towards the West which themselves go back much further. They appear in the European Enlightenment, in Russian discourses of anti-westernism from the 19th century, in German anti-westernism of the mid-twentieth century, in the anti-westernism of the Ottoman Muslim world, as well as in the political thought of much of South East Asia. In the two intervening centuries, this question as such often faded from view due to a range of factors, including the rise of nationalism as well as new forms of imperialism which were coupled with the rise of racial science that justified past colonial interventions by drawing up systems of difference. In this context, in Europe itself, history became entrenched as an academic discipline at leading universities, many of which served not only universal but also national purposes – most prominently, in France, with a new, more meritocratic system of higher education promoted by Napoleon. By the mid-19th century, one answer to the original Enlightenment question regarding the cosmopolitan purposes of history writing was given by Hegel, who could be described as a liberal ideologue for the Prussian state. His answer in the Philosophy of History essentially posited European history as the focus of universal history, because, as he saw it, it was in and through European history that the ‘world spirit’ manifested itself. It was only in the aftermath of the Second World War that the original Enlightenment preoccupation with cosmopolitan purposes returned to the agenda of public discourse, shaped not least by institutions such as UNESCO but also from within the universities. Here is where ‘eurocentrism’ came into view.
It is undeniable that Europe – and I mean in the first instance western Europe – acquired a dominant position as an object of historical enquiry not only due to the role of some European powers and groups in the political and economic history of the world but also, more narrowly, because of the historical dominance of Germany, France and Britain, and of European influence on the US, within the modern university system. Unsurprisingly, the critique of this hegemony first emerged in the very institutions which have been shaped by it.

What neither the Enlightenment cosmopolitans nor the present-day critical thinkers like to discuss is the complicity of the elites of colonised countries in processes of colonisation, and the sheer varieties of racism, internal colonialism and slavery within Europe itself. This is a point forcefully made by Frantz Fanon in his critique of the ‘national bourgeoisie’ in his 1961 classic Wretched of the Earth. Terms such as slavery, racism, and Orientalism have their place within the relationship of Europeans with other Europeans as well, and I see it as one of my tasks as an historian of Europe to remind students about this. To explore this in a more multi-directional way, there is now a formidable Oxford Handbook of the Ends of Empire (2018), with a chapter on China ‘from Manchu to Mao’ by our Sheffield colleague Tehyun Ma, work by Joya Chatterji on decolonisation in South Asia, by James Mark and Quinn Slobodian on decolonisation in Eastern Europe, by my LSE colleague David Motadel on transnational aspects of Islamic movements against empire, and many others. For Eastern Europe, there is much to learn here at the level of theory and research design from historians such as Alexander Etkind and his students. I wonder how these themes of empire, imperialism, and internal colonisation play out in the context of medieval history, with the shadow of the Roman empire and its diverse legacies lingering on.

While 19th-century empires were different from preceding forms of empire, empire itself was obviously not a modern or indeed a European invention. Parts of early medieval Europe can in some ways be considered a post-imperial set of societies (though not of course Byzantium), and medieval historians have extensively studied colonisation processes within Europe. And there’s a huge and exciting body of work about ethnicity and, more recently, race in the Middle Ages, often led by medievalists of colour. This latter body of work isn’t without its critics – Vanita Seth’s recent piece in History and Theory is important here – but the point that many of the analytical tools often used to describe European involvement with the wider world have purchase on Europe itself is crucial. Treating Europe and the wider world differently from a methodological point of view can be just another, more subtle form of eurocentrism.


To wrap up: the question of how historians of Europe should change their practice of research and writing in the light of such critiques does not have a self-evident answer. What does it mean to provincialize Europe in historiographical practice, and in what sense does revalorizing the non-European world depend on devalorizing the idea of European civilization? As soon as you start thinking in these terms, you will find the implications of ‘eurocentrism’ as a term to be very prescriptive, and the historical accounts on which they are based misleadingly reductive. Many history departments in the UK have recently ‘globalised’ their modern history courses. But even here there are many possible paths for implementing such an agenda. In designing course readings and supporting students’ independent research projects, I see broadly two options. Either I simply remove or reduce readings focusing on Europe, or I let students work through them and deal with their various imperfections and shortcomings before starting their own explorations. My sense is that students generally are susceptible to manifestos of progress: they want to land on the right side of history, and any promise of shortcuts in this direction is therefore highly appealing. I’ve already written elsewhere about a recent tendency to divide up past thinkers into ‘purely’ progressive or ‘purely’ reactionary figures, by the standards of our day, which, in my view, can only lead to a shallow and self-serving celebration of the ideas of one’s own generation. Or take another example. What should students from China learn when they study the history of European racial science and its ties to colonial governance? The easier path is to dismiss this history as a problem of the West, to embed it in a political language of anti-westernism. It is far more arduous to think of the ways in which similar processes might be, or might have been, occurring in China itself.
Yet, in my view, it is the arduous path that has more potential to lead to new critical histories of modernity, precisely because it does not culminate in the certainty of what it means to be on the right side of history. Students can be selective in exposing ‘eurocentrism’, but in fact they are as unfamiliar with the geography and politics of Eastern and Southern Europe as they are with the distinction between socialist and capitalist-aligned African states during the Cold War. Pitting the study of one against the other because one is supposedly more European than the other misses the point: what is needed is contextual knowledge of modern history, the ability to pinpoint the relevance of one’s local case study in a global framework.

Another way to think about this is that genuine historical inquiry itself rarely starts from narrative. Rather, the story comes at the end. Often the questions take root when you read a text or a document from the past and enter into a dialogue with it. Take, for instance, a historical text such as Max Weber’s (unfinished) study of music as a case study of rationalisation, supposedly a linchpin of western modernity. An intellectual historian who studies this text today will rightly see it as a work of political thought on Europe and the West – but in the 1950s the text would have been contextualised as a case study in sociology as such. In this sense it would be beneficial to ‘provincialize’ Max Weber, but certainly not to excise him from the canon. For all its shortcomings, it was Weber’s status and, later, the canonical status of his works which gave recognition and visibility to non-western musical systems. Moreover, one could look at Weber’s intellectual encounters with W. E. B. Du Bois, for instance, in the light of which a re-examination of such a European and decidedly eurocentric text could lead to a productive investigation into the use of musical notation and oral tradition as a source of political thought. This also brings me to a related concern raised by many critics of ‘eurocentrism’, which is the demand to reduce the study of canonical thinkers and approaches. Yet some canonical histories have historically been a great bridge introducing underrepresented topics and people to the academy.

I’m sure that the term ‘eurocentrism’ has produced a lot of constructive debates in the past, and it has become a natural part of our vocabulary, but the sort of disciplinary self-criticism that is needed today should transcend a narrow use of such terminology. It could instead take into account a question that has been debated since at least the Enlightenment, in some landmark interventions by authors such as Kant and Herder (1784), namely: what does it mean [for historians of Europe] to write history with a cosmopolitan purpose? I have tried examining these sorts of questions myself in conversation with colleagues in a volume I edited, called Cosmopolitanism in Conflict.

It is all the more important given that the decades of globalisation have redistributed power geographically – not in the sense of an actual social redistribution of wealth, but in the sense of co-opting more geographical areas as sites of power. This means that the holders of financial capital or power and their locations are no longer as visibly ‘European’ as in the 1980s. Secondly, the globalisation of the university sector, particularly in Britain and the United States, has created a mixed global population of students and academics. But the financial structure of British universities in particular is such that they are only accountable for making provision for socially inclusive teaching (i.e. support for students who cannot fund themselves) at a national level, which means that students from places such as the ‘Global South’ or what is now called the ‘Global Majority’ tend to come from much wealthier social backgrounds. This asymmetry of class in the representation of students from outside Britain is, in my view, often overlooked. All this has implications for the way in which historians might self-critically reflect on the future of their discipline in research and teaching.

I would like to go back to the Herderian terms of asking the question what it might mean to practise history in a cosmopolitan sense, i.e. in the interests of all humanity, but also add the dimension of social diversity to this agenda. Historical research is among other things also a process of communication. What is needed is a framework which enables scholars and researchers to form a dialogue with multiple local, national, and global communities – the opening up of universities for this kind of conversation beyond their competitive market relations and rankings. Interestingly, one of the few positives of the pandemic has been the provision of exciting opportunities for just that.

Whatever the problems of modern history are, using the term ‘eurocentrism’ to impose modes of thinking on others strikes me as an unnecessary kind of puppeteering. It is productive when used as an invitation for a conversation, but good historical research comes from thinking about things for yourself, and from open encounters with other minds, including unsavoury characters. Historical research as I see it is not particularly suited for resolving problems, it is there to involve us in them, however uncomfortable the insights.

Charles West is a Reader in the Department of History at the University of Sheffield, where he’s taught since 2008. Current research projects include a collaborative Anglo-German study of local priests in tenth-century Europe, and a general history of eleventh-century Europe, under contract with OUP. His most recent publication is on early medieval ideas of the secular.

Dina Gusejnova is an Assistant Professor at the Department of International History at LSE, having previously taught Modern History at the University of Sheffield from 2015 to 2019. Her current research explores the circulation of ideas of citizenship and nationality in Europe during the Second World War, most recently, in this article on German ideas of Englishness in the context of wartime internment.

Cover image: ‘Wonderful Old Radio Dial’ courtesy of James Cridland, [accessed 11 May 2021].


‘Violent affections of the mind’: The Emotional Contours of Rabies


Living through the Covid-19 pandemic has more than drummed home the emotional dimensions of diseases. Grief, anger, sorrow, fear, and – sometimes – hope have been felt and expressed repeatedly over the last year, with discussions emerging on Covid-19’s impact on emotions and the effect of lockdown on mental health.

But emotions have long attached themselves to diseases. Rabies – sometimes called hydrophobia – is a prime example.[i] In nineteenth-century Britain, France, and the United States, rabies stoked anxieties. Before the gradual and contested acceptance of germ theory at the end of the nineteenth century, some doctors believed that rabies had emotional causes.

For much of the nineteenth century, the theory that rabies generated spontaneously jostled with the theory that it was spread through a poison or virus. The spontaneous generation theory stressed the communality of human and canine emotions. Rather than contagion through biting, it was emotional sensitivity that made both species susceptible to the disease.

A sensitive person prone to emotional disturbances was considered particularly at risk from external influences that might cause rabies to appear. “Violent affections of the mind, operating suddenly and powerfully on the nervous system” could in rare cases lead to rabies or, at the very least, exacerbate the symptoms in nervous patients, according to Manchester physician Samuel Argent Bardsley (who was more commonly known for promoting quarantine as a way of containing the disease).

For one Lancashire man, John Lindsay, the difficulty of feeding his family drove him to anxiety and despair, exacerbated by a bout of overwork and a lack of food. Fatigued, suffering from headaches, and fearing liquids, Lindsay remembered being bitten by a supposed mad dog some twelve years previously. Amidst violent spasms, visions of the black dog “haunted his imagination with perpetual terrors” and made recovery seem “hopeless.” With reluctance, Bardsley concluded that this was a case of spontaneous rabies. Emotional distress and an overactive imagination had caused and aggravated the disease.

During the mid-nineteenth century prominent London doctors argued that rabies was closely linked to hysteria and had emotional and imaginative origins, much to the chagrin of veterinarian William Youatt, the leading opponent of theories of spontaneous generation.[ii] In the 1870s alienists (otherwise known as psychiatrists) then lent greater intellectual credibility to theories of rabies’ emotional aetiology. They stressed the powerful sway that emotions and the mind held over individuals, especially in the enervating conditions of modern life.

Physician and prominent British authority on mental disorders Daniel Hack Tuke argued that disturbing emotions and images could create hydrophobic symptoms in susceptible individuals. Referencing Bardsley, and drawing on French examples, he argued that “such cases illustrate the remarkable influence exerted upon the body by what is popularly understood as the Imagination.” The very act of being bitten by a dog and the “fearful anticipation of the disease” was enough to spark rabies, even if the dog was not rabid. Even rational and emotionally-hardy doctors had reported suffering from hydrophobic symptoms when recalling the appalling scenes of distress during the examination and treatment of hydrophobic patients.[iii]

Tuke suggested that in some cases excitement or other forms of mental, emotional, and sensory overstimulation could activate the virus years after a bite from a rabid dog. He drew on a striking case from the United States, as reported by the Daily Telegraph in 1872. A farmer’s daughter had been bitten by a farm dog when choosing chickens for slaughter. The wound healed and no signs of rabies appeared until her wedding day two months later. The “mental excitement” of this life-changing event brought on a dread of water. After the ceremony she experienced spasms and “died in her husband’s arms.”

Tuke reproduced the newspaper’s view, and more generalized gendered assumptions about female emotional delicacy, that such “nervous excitement” had a profound influence on the “gentler” sex. In this case, her nerves were considered to have been exacerbated by the anticipation of the impending wedding night, which was often framed as an emotionally fraught sexual encounter.[iv]

Dr William Lauder Lindsay of the Murray Royal Asylum in Perth, Scotland, was another prominent proponent of the view that rabies was a predominately emotional disease. The disease, he argued, “is frequently, if not generally, the result of terror, ignorance, prejudice, or superstition, acting on a morbid imagination and a susceptible nervous temperament.” Under the sway of their overactive imagination, an individual could take on “canine proclivities,” such as barking and biting. In classist language, Lindsay argued that rabies showed the influence of mind over the body, especially in the “lower orders of the community.”[v]

The British alienists’ depiction of rabies as a predominately emotional disorder made its way across the Atlantic. In the mid-1870s Dr William A. Hammond, President of the New York Neurological Society and leading American authority on mental disorders, stated that the evidence from Europe suggested that heightened emotions might cause rabies in humans. More generally, New York physicians and neurologists debated whether or not individuals had died from actual rabies or fears of the disease, and discussed how fear might turn a bite from a healthy animal into death.[vi]

The alienists lent greater credibility to earlier theories that rabies anxieties could lead to imaginary or spurious rabies. Tuke asserted that fears of rabies could create an imaginary manifestation of the disease. “Hydrophobia-phobia” demonstrated clearly the “action of mind upon mind,” and was distinct from the “action of the mind upon the body” in those cases when emotional distress led to actual rabies.

Echoing Tuke, Lindsay identified women as a particular vector in triggering spurious rabies. He asserted that they spread rabies fears, as supposedly shown by an Irishwoman in Perth who had frightened her husband into believing he had rabies. For Lindsay, this was a classic case of spurious (or false) rabies, which required the rational and firm intervention of medical men, such as himself, to stamp out. But he felt himself fighting an unstoppable tide. For in America, as well as Britain, the press ignited fears and created spurious rabies in susceptible individuals.[vii]

Lindsay and Tuke believed that rabies could, in some cases, be transmitted by dogs to humans through biting and “morbid saliva.” But some doctors controversially argued that it was a purely emotional disease. Eminent Parisian doctor Édouard-François-Marie Bosquillon set the tone in 1802 when he confidently declared that rabies in humans was caused solely by terror. His observation that individuals were struck with hydrophobic symptoms, including “loss of reason” and “convulsive movements,” at the sight of a mad dog provided sufficient proof.

Horror-inducing tales of rabies, fed to children from a young age, created fertile conditions for the development of the disease, particularly in “credulous, timid and melancholic” people. Gaspard Girard, Robert White, William Dick, and J.-G.-A. Faugére-Dubourg developed this line of argument as the century progressed. And the theory had traction. In the 1890s, Philadelphian neurologist Charles K. Mills insisted that rabies was purely a disease of the nerves. Such theories were, however, contentious, and Tuke cautioned against those who asserted that rabies was solely an imaginary disease.[viii]

Nonetheless, these theories cemented rabies as an emotionally-fraught disease and reinforced the dangers of dog bites: even a bite from a healthy dog could trigger a lethal neurological reaction in the swelling ranks of anxious individuals.

Dr Chris Pearson is Senior Lecturer in Twentieth Century History at the University of Liverpool. His next book Dogopolis: How Dogs and Humans made Modern London, New York, and Paris is forthcoming (2021) with University of Chicago Press. He runs the Sniffing the Past blog and you can download a free Android and Apple smart phone app on the history of dogs in London, New York, and Paris. You can find Chris on Twitter @SniffThePastDog.

Cover image: ‘Twenty four maladies and their remedies’. Coloured line block by F. Laguillermie and Rainaud, ca. 1880. Courtesy of the Wellcome Collection, [accessed 25 March 2021].

[i] Contemporaries sometimes used “rabies” and “hydrophobia” interchangeably to refer to the disease in animals and dogs, but sometimes used “rabies” to refer to the disease in dogs and “hydrophobia” for humans. With the rise of germ theory at the end of the nineteenth century, “rabies” gradually replaced “hydrophobia.” For simplicity’s sake, I will use “rabies” to refer to the disease in humans and animals unless I quote directly from a historical source.

[ii] Samuel Argent Bardsley, Medical Reports of Cases and Experiments with Observations Chiefly Derived from Hospital Practice: To which are Added an Enquiry into the Origin of Canine Madness and Thoughts on a Plan for its Extirpation from the British Isles (London: R Bickerstaff, 1807), 238-50, 284, 290; “Hydrophobia”, The Sixpenny Magazine, February 1866; Neil Pemberton and Michael Worboys, Rabies in Britain: Dogs, Disease and Culture, 1830-2000 (Basingstoke: Palgrave Macmillan, 2013 [2007]), 61-3.

[iii] Daniel Hack Tuke, Illustrations of the Influence of the Mind Upon the Body in Health and Disease Designed to Elucidate the Action of the Imagination (Philadelphia: Henry C. Lea, 1873), 198-99, 207.

[iv] Tuke, Illustrations, 200-1; Daily Telegraph, 11 April 1872; Peter Cryle, “‘A Terrible Ordeal from Every Point of View’: (Not) Managing Female Sexuality on the Wedding Night,” Journal of the History of Sexuality 18, no. 1 (2009): 44-64.

[v] William Lauder Lindsay, Mind in the Lower Animals in Health and Disease, vol. 2 (London: Kegan Paul, 1879), 17; William Lauder Lindsay, “Madness in Animals,” Journal of Mental Science 17:78 (1871), 185; William Lauder Lindsay, “Spurious Hydrophobia in Man,” Journal of Mental Science 23: 104 (January 1878), 551-3; Pemberton and Worboys, Rabies, 96-7; Liz Gray, “Body, Mind and Madness: Pain in Animals in the Nineteenth-Century Comparative Psychology,” in Pain and Emotion in Modern History, ed. Rob Boddice (Basingstoke: Palgrave, 2014), 148-63.

[vi] “Hydrophobia: The Subject Discussed by Medical Men,” New York Times, 7 July 1874; Jessica Wang, Mad Dogs and Other New Yorkers: Rabies, Medicine, and Society in an American Metropolis, 1840-1920. (Baltimore: Johns Hopkins University Press, 2019), 150-1.

[vii] Tuke, Illustrations, 198-99; Lindsay, “Spurious Hydrophobia in Man,” 555-6, 558.

[viii] Lindsay, Mind in the Lower Animals, 176; Édouard-François-Marie Bosquillon, Mémoire sur les causes de l’hydrophobie, vulgairement connue sous le nom de rage, et sur les moyens d’anéantir cette maladie (Paris: Gabon, 1802), 2, 22, 26; Vincent di Marco, The Bearer of Crazed and Venomous Fangs: Popular Myths and Delusions Regarding the Bite of the Mad Dog (Bloomington: iUniverse, 2014), 141-47; Pemberton and Worboys, Rabies, 64; Tuke, Illustrations, 198-99; Wang, Mad Dogs, 151-2.


‘Always protest’? Drag Race, Pathé Newsreels, and Subversion in Mainstream Media


RuPaul’s Drag Race sells itself, and has been praised, as a subversive television series. RuPaul, eponymous creator of the drag contest gameshow, has stated ‘true drag will never be mainstream. Because true drag has to do with seeing that this world is an illusion’. British judge Graham Norton recently claimed ‘there’s something dangerous about drag still’. Echoing this, a contestant queen from the syndicated British Drag Race enthused that ‘Drag was always a protest, a political statement’. Drag Race, participants and producers alike insist, is inherently subversive because drag necessarily challenges the gender norms of ‘straight’ society.

Drag Race has also become a mass media phenomenon. A niche show in 2009, its 13th series premiered this year to 1.3 million viewers. Interviewed, like any self-respecting A-list celebrity, by the Muppets, and with both a Simpsons cameo and a star on the Hollywood Walk of Fame to his name, RuPaul is arguably the most famous drag queen in the world. This raises the question: can drag retain a subversive edge in mainstream media?

To consider this, it is instructive to look at one of drag’s first brushes with mass media in Britain. It was during the interwar period that drag first appeared onscreen, chiefly through cinema newsreels. Newsreels – short non-fiction topical films summarising the week’s current events – were included in almost every cinema programme until the 1960s. To leaven the news, they frequently featured variety entertainment; offshoot newsreels such as Pathetone even consisted entirely of filmed music hall acts.

A well-established form of music hall repertory from the nineteenth century, drag soon found its way into the newsreel. Bert Errol amazed cinemagoers by changing into high drag before their eyes in 1922. West-End comedian Douglas Byng appeared in rudimentary drag singing innuendo-laden falsetto across the 1930s. A 1937 item covered a police pantomime, with multiple shots of officers putting on makeup and dresses. In 1939, six sailors dressed as fairies sang and pranced before King-Emperor George VI during a naval inspection.

This seems remarkable at a time when populist paper John Bull ran editorials attacking London’s queer men for transvestism, castigating them as the ‘painted boy menace’.[1] From the mid-1920s, men wearing women’s clothes and makeup became tantamount to being queer.[2] In the 1930s, it is estimated 40 percent of Britons went to the cinema once and 25 percent twice or more a week.[3] To make drag palatable for the mainstream, newsreels had to ensure conventional manliness remained unchallenged and any association with queerness was muted.

As such, newsreels usually placed drag in establishment settings. Byng was a fixture of London’s fashionable set, always filmed in high-end venues like the Paradise Club, laughing with elites more so than at them. Likewise, Errol’s wife helped him change into drag, making sure audiences knew he was a red-blooded heterosexual, wig and high heels notwithstanding. The police officers and sailors returned to their uniforms, drag but a brief interlude (the naval fairies lasted but twenty seconds onscreen) from their ‘manly’ public service. Ensconced in marriage, elite society, and ‘masculine’ professions, queens could not truly send up the establishment when they were often performing from the heart of it.

Moreover, newsreels always framed drag as comedy. Ian Green has argued comedy allows latitude for contentious topics. Yet, because comedy resolves in laughter, it curtails earnest critique.[4] David Sutton likewise concludes comedy as a genre is ‘the appropriate site for the inappropriate, the proper place for indecorum’.[5] Comedy is establishment-condoned critique, safely dissipated in laughter. All the above acts, awash with puns and gags, aimed to make cinemagoers laugh, not challenge their gendered assumptions. Far from a challenge to the status quo, then, interwar drag acts could only enter mainstream media as safe entertainment bereft of queer connotations.

This is not to say drag culture could not be subversive. For queer men to wear women’s clothes and attend drag balls was certainly a brave and subversive act in the interwar period, one that provoked the British establishment.[6] The interwar life of Quentin Crisp is representative of the defiant subversion that came from wearing cosmetics.

Yet, as Jacob Bloomfield has shown, drag onstage was not inherently controversial and remained a staple of popular theatre.[7] Similarly, filmed drag acts obviated controversy in order to appeal to the broadest possible audience. In fact, looking at newsreel drag items reveals a legacy of conservatism for drag acts in the mainstream.

The producers of Drag Race would like to make their show the heir to the counterculture of drag balls and gay bars. Yet, in many respects, it is the mainstream heir to newsreel variety acts. Like newsreels, Drag Race is foremost comic entertainment, more inclined to jokes than politics. What little gender discussion there is occurs in the fleeting moments between farcical gameshow skits. The only challenges presented are to the competing queens’ dignities.

Like Pathe’s producers, RuPaul has espoused a profoundly conservative view of ‘true’ drag. Through transphobic comments, he has stressed drag as the exclusive province of gay men. Thus, much as newsreels removed any ‘controversial’ association with queerness, so Drag Race has placed strict limits on what drag represents and who can perform it.  

A look at the history of drag in newsreels reveals that to project drag through mass media is not inherently subversive. Whether in Pathé or on BBC3, being produced as mainstream entertainment severely curtails any potential for real subversion of societal norms such as gender. Former drag performer Paul O’Grady, carping in 2017 about Drag Race, contended that his drag persona Lily Savage ‘belonged in a pub, especially a gay bar, where you could rant and rave’. Considering drag’s relationship with popular media, perhaps it is only in niche subcultures that subversion can truly flourish.

Conner Scott is a PhD student in the Department of History at the University of Sheffield. His research seeks to explore the role of British newsreels in everyday life, and how they (re)presented the cinemagoing public to itself on a weekly basis between c.1919 and c.1939.

Cover image: Manchester Pride Parade 2019. A group of five drag queens representing BBC’s ‘RuPaul’s Drag Race UK’ on pink stage, Manchester, 24 August 2019. Used courtesy of Goncalo Telo for non-commercial, educational purposes.

[1] Matt Houlbrook, ‘“The man with the powder puff” in Interwar London’, The Historical Journal 50.1 (2007), pp. 147-49.

[2] I use the term queer as it was the most common self-identity of interwar men who had sexual and emotional relationships with other men and avoids the anachronism of gay. See Matt Houlbrook, Queer London: Perils and Pleasures in the Sexual Metropolis, 1918-1957 (London, 2005), p. xiii.

[3] Annette Kuhn, An Everyday Magic: Cinema and Cultural Memory (London, 2002), p. 2.

[4] Ian Green, ‘Ealing: In the Comedy Frame’ in James Curran and Vincent Porter (eds), British Cinema History (London, 1983), p. 296.

[5] David Sutton, A Chorus of Raspberries: British Film Comedy 1929-1939 (Exeter, 2000), p. 60.

[6] See Matt Houlbrook, ‘Lady Austin’s Camp Boys: Constituting the Queer Subject in 1930s London’, Gender and History 14.1 (2002), pp. 31-61; Houlbrook, Queer London.

[7] See Jacob Bloomfield, ‘Splinters: Cross-Dressing Ex-Servicemen on the Interwar Stage’, Twentieth Century British History 30.1 (2019), pp. 1-28.


Fear or Fetish? The Fetishisation of Lesbians in Cold War America


In the 1950s, American society saw a huge rise in anxieties regarding gender norms and sexuality. Homosexuals were demonised through the Lavender Scare – a moral panic focused on gay and lesbian US government employees – and ideas of the nuclear family were promoted in the fight against Communism. Yet, throughout this period, there was also an influx of highly erotic lesbian fiction and of magazines aimed at heterosexual men with overtly sexualised lesbian themes. This sexualisation remains prevalent today and continues to have detrimental impacts upon the lives of lesbian women,[1] and yet its origins have received little attention in historical debate.

When constructions of homosexuality have been looked at during this period, historians have tended to focus on the political sphere. David Johnson, for example, focuses much of his attention on how anxieties regarding sexuality permeated political culture and the lives of elites.[2] Therefore, little attention is given to popular culture and perceptions of the ‘ordinary’ American citizen. Focusing primarily on political culture also means that Johnson’s narrative mainly looks at how the Lavender Scare impacted wider cultural perceptions of homosexual men.

Consequently, the sexualisation of lesbians by heterosexual men and how this came to the fore with such force during this period has not received necessary attention.

At the end of the war and throughout the 1950s, American society took a conservative turn, with ideas of gender and ‘family’ becoming all the more important as a way to distinguish America from the Communist East. Women were particularly impacted by this growing interest in conformity. As Elaine Tyler May points out, the full-time housewife became synonymous with ideas of American freedom.[3] Anything that deviated from this ideal was therefore seen as a threat.

At the same time, ideas of homosexuality were changing and ‘the lesbian’ was fashioned as an immediate danger. Lesbianism began to be framed as a sickness, but crucially it was a sickness that could be cured – if only a man could show them a “good time”.

Simultaneously, we see a crisis of masculinity. On numerous occasions during this period, historian and social critic Arthur Schlesinger wrote on the issue, arguing that World War II had ushered in an uneasy sense of vulnerability and a loss of a clear sense of self for many men that continued throughout the 1950s. This sense of a decline in manhood’s mastery over others, combined with the idea that lesbians could be ‘regained’ by patriarchal concepts of heterosexuality, meant that ‘the lesbian’ was constructed as an opportunity for men to prove themselves. The post-war and Cold War period therefore set up the perfect conditions within which the sexualisation of lesbians could flourish.

This resulted in an influx of pulp fiction and men’s magazines in which these themes were reflected. Stories of lesbian orgies, threesomes and lesbian nymphomaniacs were extremely popular amongst heterosexual men during this period. Within these novels, lesbians are presented as deviants, yet deviants who are often regained by heterosexual, familial norms after experiencing life-changing heterosexual sex.

Cover of The Third Sex by Artemis Smith (1963 Edition).

The message is therefore clear: if men show lesbians a good time by reasserting their masculinity, these women will once again fit within the Cold War ideals of conformity – everyone’s a winner.

Men’s magazines took a similar approach. Stories and images of two women looking for a man were extremely popular. What we can learn from 1950s and 1960s America is that sex sells, but lesbian sex sells better.

This had very real consequences for lesbians, as men encroached on their space in search of sexual encounters. Analysis of interviews and testimonies shows that this repressive context led to a thriving underground lesbian movement and a vast number of lesbian bars being established. Heterosexual men often took advantage of these lesbian spaces, going there in search of lesbian women to have sex with – further demonstrating how lesbians were constructed as an opportunity in the eyes of men.

Ultimately, the period from 1947 up until the Stonewall riots of 1969 provided the ideal conditions within which the sexualisation of lesbians could and indeed did flourish. The sexualisation of lesbians is still widespread within our society today, and lesbians continue to face the challenge of not only being seen as a sexual fantasy but also having their sexuality presented as merely performative and something that can be “regained” by heterosexual masculinity.

In numerous recent insight reports, Pornhub revealed that ‘Lesbian’ was the most searched-for and most viewed category across numerous American states, with 75 percent of the American audience being male. These statistics demonstrate that lesbianism continues to be framed within the male gaze. Sexualisation is not the same as acceptance and therefore it is important that we continue to address its roots in order to hold both society and ourselves accountable today.

Jamie Jenkins is a PhD student at Radboud University working on the Voices of the People project. Her research investigates how the media constructed popular expectations of democracy in Great Britain between the end of the Second World War and the 1980s. She tweets @jenkinsleejamie

Cover image: Cover of Lesbian Love by Marlene Longman (1960).

[1] See Ofcom’s ‘Representation and Portrayal on BBC TV 2018’ report regarding the representation of lesbian women on television.

[2] David K. Johnson, The Lavender Scare: The Cold War Persecution of Gays and Lesbians in Federal Government (Chicago, 2004).

[3] Elaine Tyler May, Homeward Bound: American Families in the Cold War Era (New York, 1988).


‘I don’t think I’m Wrong about Stalin’: Churchill’s Strategic and Diplomatic Assumptions at Yalta


On 23 February 1945 Churchill invited all ministers outside the War Cabinet to his room at the House of Commons to hear his account of the Yalta conference and the one at Malta that had preceded it. The Labour minister Hugh Dalton recorded in his diary that “The PM spoke very warmly of Stalin. He was sure […] that as long as Stalin lasted, Anglo-Russian friendship could be maintained.” Churchill added: “Poor Neville Chamberlain believed he could trust Hitler. He was wrong. But I don’t think I’m wrong about Stalin.”[1]

Just five days later, however, Churchill’s trusted private secretary John Colville noted the arrival of:

“sinister telegrams from Roumania showing that the Russians are intimidating the King and Government […] with all the techniques familiar to students of the Comintern. […] When the PM came back [from dining at Buckingham Palace] […] he said he feared he could do nothing. Russia had let us go our way in Greece; she would insist on imposing her will in Roumania and Bulgaria. But as regards Poland we would have our say. As we went to bed, after 2.00 a.m. the PM said to me, ‘I have not the slightest intention of being cheated over Poland, not even if we go to the verge of war with Russia.’”[2]

At an initial glance, there seems to be a powerful contradiction between these different sets of remarks. In the first, Churchill appears remarkably naïve and foolish, putting his faith in his personal relationship with a man whom he knew to be a mass murderer. In the second he seems strikingly, even recklessly bellicose, contemplating a new war with the Soviets, his present allies, even before the Germans and the Japanese had been defeated.

Surprising though it may seem, the disjuncture is not as large as it appears on the surface. Relations with the USSR and the future of Poland were not the only things that were at stake at Yalta. The Big Three took important decisions regarding the proposed United Nations Organization, and the post-war treatment of Germany, and even Anglo-US relations were not uncomplicated. In this post, however, I want to focus on the Polish issue and the broader question of how Churchill viewed the Soviet Union and its place in international relations more generally. I will outline three key assumptions that governed Churchill’s approach and which explain the apparent discrepancies in his remarks upon his return.

Assumption 1: The key to the Soviet enigma was the Russian national interest.

This assumption is the one that needs explaining at greatest length. In a radio broadcast given in the autumn of 1939, a month after the outbreak of the Second World War, Churchill told his audience: “I cannot forecast to you the action of Russia. It is a riddle, wrapped in a mystery, inside an enigma; but perhaps there is a key. That key is Russian national interest.”[3]

What Churchill meant was that the Soviet Union was acting on traditional Great Power lines, in a rational and predictable way. This was a striking, and remarkably sanguine, thing to say just a few months after the conclusion of the Nazi-Soviet pact. The pact had clearly not disrupted his conclusion, reached earlier in the thirties, that the USSR was a potentially responsible actor with which it was possible for Britain to collaborate.

That conclusion was in marked contrast to Churchill’s attitude in the fifteen years after 1917. To him, in the aftermath of WWI, the Bolsheviks were ‘the avowed enemies of the existing civilization of the world’.[4] He believed that Lenin, Sinn Féin and the Indian and Egyptian nationalist extremists were all part of ‘a world-wide conspiracy’ to overthrow the British Empire.[5] His central objections to Bolshevism, then, were a) that it involved a reversion to barbarism, and b) that its proponents were attempting to spread its seditious principles globally.

As late as 1931 he was portraying the USSR as a “gigantic menace to the peace of Europe”.[6] There followed almost three years in which he failed to offer substantive comment on the Soviet Union, a period during which, however, he appears to have significantly adjusted his views. The rise of Hitler was of course crucial here. In August 1934, the Sunday Express reported that Churchill had had a change of heart on Russia. An article by the journalist Peter Howard was headlined: ‘Mr. Churchill Changes His Mind: The Bogey Men of Moscow are Now Quite Nice.’[7]

Howard’s piece was prompted by a speech by Churchill the previous month. In this he had praised the proposal – which in fact never came off – of a mutual-aid treaty between the USSR, Czechoslovakia, Poland, Finland, Estonia, Latvia, and Lithuania. This was an idea, Churchill said, which involved “the reassociation of Soviet Russia with the Western European system.” He cited the speeches of Soviet foreign minister Maxim Litvinov. These, he said, “had seemed to give the impression which I believe is a true one, that Russia is most deeply desirous of maintaining peace at the present time. Certainly, she has a great interest in maintaining peace.”

It was not enough, in Churchill’s view, to talk about the USSR as “peace-loving” because “every Power is peace-loving always.” Rather:  “One wants to see what is the interest of a particular Power and it is certainly the interest of Russia, even on grounds concerning her own internal arrangements to preserve peace.”[8] Thus, by the mid-1930s Churchill had reached the conclusion that the USSR had abandoned world revolution and that, acting once again as a traditional Great Power, it shared Britain’s interest in preserving the peace of Europe. This determined his attitude at the time of the Munich crisis in 1938 and held good through to the time of Yalta.

Assumption 2: Stalin would respect ‘spheres of interest’ and the so-called ‘percentages agreement’.

The Moscow summit of October 1944 was the occasion of the notorious “percentages agreement”, via which Churchill believed he had secured Stalin’s consent for the division of the Balkans into British and Soviet spheres of influence. What, if anything, Stalin had really agreed is open to debate.[9]  It is striking, though, that the Soviet press reported that the two men had reached genuine unanimity over Rumania, Bulgaria, Yugoslavia, Hungary, and Greece, and warmly welcomed the “disappearance of the Balkan powderkeg” from the European scene.[10] Crucially, Poland was not mentioned in the agreement. This explains why Churchill did not feel able to protest about Soviet actions in Rumania and Bulgaria yet spoke of his willingness to go to the brink of war over Poland.

Assumption 3: The Polish government-in-exile would best serve its own cause by not rocking the boat, and Soviet human rights abuses were best swept under the carpet.

This assumption is best illustrated by a 1943 diary entry by Ivan Maisky, the Soviet ambassador to London. This related to the notorious Katyn forest massacre, perpetrated by Soviet forces in 1940; the Nazis had recently announced the discovery of mass graves on territory now controlled by Germany. Maisky wrote:

“Churchill stressed that of course he does not believe the German lies about the murder of 10,000 Polish officers … But is this so? At one point during our conversation Churchill dropped the following remark: ‘Even if the German statements were to prove true, my attitude towards you would not change. You are a brave people, Stalin is a brave warrior, and at the moment I approach everything primarily as a soldier who is interested in defeating the common enemy as quickly as possible.’”[11]

Churchill’s real concern was to prevent the affair damaging Anglo-Soviet relations, which he believed the Polish press in Britain was putting at risk. He fulminated to his Cabinet that “no Government which had accepted our hospitality had any right to publish articles of a character which conflicted with the general policy of the United Nations and which would create difficulties for this Government.”[12] One might say that there was a further assumption here, that history was driven by Great Men, like him and Stalin, and that Great Powers could legitimately settle the fates of nations over the heads of their peoples and governments. Omelettes could not be made without breaking eggs.


When he rose to speak in the Commons on 27 February in order to expound the Yalta agreement Churchill stated his impression “that Marshal Stalin and the Soviet leaders wish to live in honourable friendship and equality with the Western democracies. I feel also that their word is their bond.”[13] Justifying this latter claim in his memoirs, Churchill wrote: “I felt bound to proclaim my confidence in Soviet faith in order to procure it. In this I was encouraged by Stalin’s behaviour about Greece.”[14] As we have already seen, however, he claimed privately to be “Profoundly impressed with the friendly attitude of Stalin and Molotov.”[15] Colville wrote: “He is trying to persuade himself that all is well, but in his heart I think he is worried about Poland and not convinced of the strength of our moral position.”[16]

Churchill cannot be convicted of total naivety. There was a degree, certainly, to which he put too much faith in his own personal capacity to win over and deal with the Soviet leadership. But his comments about Stalin’s trustworthiness were to a great extent an attempt to put on a brave face in front of his ministers and the public. He never did make the mistake of assuming that Stalin was a pushover, but he did believe that he would respond to firm handling. More broadly his approach was determined by the belief that the Soviets were rational actors who could contribute to a constructive global order, even as they acted as rivals to Britain and the USA.

The conflict between the remarks recorded by Dalton and those recorded by Colville is explained by Churchill’s belief (or most profound assumption) in managed international rivalry. It was not that he thought that Yalta had solved or prevented conflict between the Great Powers, but he believed that this type of international agreement could keep it within bounds. In respect of his apparent belief that Stalin could be induced to accept a free and democratic Poland, it is easy to see that Churchill was indeed wrong. But in regard to his overarching belief that the Soviet regime acted in line with rational calculations about its own national interests, rather than being primarily motivated by communist ideology, he may have been far less wrong than appears at first sight.

Richard Toye is Professor of Modern History at the University of Exeter. He is the author of Winston Churchill: A Life in the News and co-author (with Steven Fielding and Bill Schwarz) of The Churchill Myths, both published by Oxford University Press in 2020. He tweets @RichardToye.

Cover Image: Winston Churchill sharing a joke with Joseph Stalin and his interpreter, Pavlov, at Livadia Palace during the Yalta Conference in February 1945.

[1] Ben Pimlott (ed.), The Second World War Diary of Hugh Dalton, 1940–1945 (London: Jonathan Cape, 1986), p. 836 (entry for 23 February 1945).

[2] John Colville, The Fringes of Power: Downing Street Diaries 1939-1955 (London: Phoenix, 2005), p. 536 (entry for 28 Feb. 1945).

[3] Broadcast of 1 Oct. 1939.

[4] Speech of 3 Jan. 1920.

[5] Speech of 4 Nov. 1920.

[6] ‘Winston Churchill Sees Soviet Russia as Gigantic Menace to the Peace of Europe’, New York American, 23 Aug. 1931.

[7] Sunday Express, 26 Aug. 1934.

[8] Speech of 13 July 1934.

[9] See Albert Resis, ‘The Churchill-Stalin Secret “Percentages” Agreement on the Balkans, Moscow, October 1944’, American Historical Review, Vol. 83, No. 2 (Apr., 1978), pp. 368-387.

[10] W.H. Lawrence, ‘Russians Indicate Unity on Balkans’, New York Times, 22 Oct. 1944.

[11] Gabriel Gorodetsky (ed.), The Maisky Diaries: Red Ambassador to the Court of St. James’s 1932-1943 (New Haven: Yale University Press, 2015), p. 509 (entry for 23 Apr. 1943).

[12] Cabinet Minutes, 27 Apr. 1943, WM (43) 59th Conclusions, CAB 65/34/13, The National Archives, Kew, London.

[13] Speech of 27 Feb. 1945.

[14] WSC, Triumph and Tragedy, p. 351.

[15] WSC to Clement Attlee and James Stuart, 14 Feb. 1945, Churchill Papers, CHAR 9/206B/207.

[16] Colville, Fringes of Power, p. 565 (entry for 27 Feb. 1945).


What’s in a Special Relationship?


The recent decision by US President Donald Trump to remove some American troops from Germany has brought much consternation to the international community. One interesting twist that has found its way into the conversation occurred when Antony Blinken, policy advisor to presidential candidate and former Vice President Joe Biden, commented that the move weakened NATO and harmed Germany, ‘our [America’s] most important ally in Europe.’ Many on both sides of the Atlantic gasped at this comment, but none more so than those in the United Kingdom. The truth of the matter is – and this may come as a shock to some – that the United States has never seen the Anglo-American relationship as special. Yes, there are cultural and linguistic commonalities, but when it comes to foreign policy, the United States’ view on Britain and Europe does not match that of an Anglo-American ‘special relationship’.

It would be fair to say that Winston Churchill’s consistent message of a Special Relationship between Great Britain and the United States has ingrained the phrase in the minds of most citizens of both countries. Nevertheless, from a governmental and policy position, it has traditionally been a one-sided relationship. American leaders have rarely used the phrase and even more rarely acted on it, to the point that former German Chancellor Helmut Schmidt is reported to have said the ‘British claim to have a special relationship with the US, but if you mention this in Washington, no one knows what you are talking about.’ This idea was reinforced during the Brexit debates when US President Barack Obama stated that the UK would find itself at the back of the queue in US trade negotiations. The last fifty years provide a clearer understanding of how the US views the ‘Special Relationship.’

It would also be fair to say that since the end of the Second World War, US foreign policy has focused on a strong Europe. The ‘Special Relationship,’ as a purely Anglo-American relationship, is very much a British view. This does not mean that the US has not valued, or does not value, Britain. What is often forgotten, intentionally or not, is the importance of Europe to US foreign and trade policy since 1945. During the Second World War, the US and Britain, along with the Soviet Union, stood side-by-side to defeat the Axis. Once the war was over, and the Cold War began, the relationship between the US and Britain changed. What began as a strategic and military partnership during the Second World War quickly morphed into a relationship between two unequal partners. Despite Britain’s continually diminishing status, US presidents from Truman to Clinton understood the value of working with the British to meet US foreign policy goals.[1]

Nevertheless, US presidents have also focused on a strong Europe. Successive US presidents supported British involvement in different European projects. Dwight D. Eisenhower as Supreme Allied Commander Europe and later as President was firm in his belief that any plan to defend Europe required a British commitment to the continent. As such, he continually pushed Churchill, and later Eden and Macmillan, to take a more active role in NATO and the European Economic Community, which they eventually did.

The collapse and break-up of the Soviet Union in 1991 left US leaders believing they did not need multilateral alliances. The US was and is, after all, the lone superpower. Since this time, presidents from both parties have chosen to ‘go it alone.’ In the meantime, Britain failed to stop its slide away from world power status. True, London remains one of the great financial centers in history, but as a nation Britain no longer has the military power to be more than a limited partner on the world stage. There is no more shocking example of how far Britain’s defense capabilities have fallen than the fact that the Royal Navy is now smaller than Pakistan’s navy and only slightly larger than Qatar’s, and the Royal Air Force is about the size of the Brazilian air force.[2]

Under George W. Bush and Barack Obama, it appeared that the US was moving closer to Germany as its leading partner in European issues. This was not a new position, per se, and it was not a result of Germany’s military prowess (it is also struggling to maintain a large and functioning force) but due to its economic power. The US position since 1945 has been to forge a durable transatlantic link between the US and Europe.[3] At the beginning of the twenty-first century, Germany had the fourth-largest economy in the world with a GDP that was more than $1 trillion larger than that of Britain. What is often overlooked in all of the discussion about America pulling closer to Germany and further away from Britain, or about the withdrawal of US troops from Germany is Europe’s importance to the US.

A look at the Bank of England’s Quarterly Bulletin provides an idea of how important Europe is to the US relative to the UK. America’s most trusted trade partners are still the United Kingdom and Europe. As the year 2020 rolls towards the last quarter, Germany is feeling angst about its special relationship with the US. While the US president drives that anxiety, a reversal of roles may be in the offing. With US politics becoming less reliable in recent years, Europe might decide to no longer rely on the US and ‘go it alone,’ just as the US did in the 1990s. However, with reports that Johnson’s government is secretly ‘desperate’ for a Biden victory in hopes of a revived comprehensive trade plan, the chances of a Europe without the US seem small. In light of Brexit, the UK might think about how the US has historically viewed the special relationship. For the US, the relationship that is and has always been special has been with Europe – a Europe that includes Britain.

Justin Quinn Olmstead is currently Associate Professor of History and Director of History Education at the University of Central Oklahoma with a Concurrent Appointment in the College of Arts and Humanities at Swansea University, Wales as Affiliate Faculty with responsibility for doctoral research supervision. He has edited two books, Reconsidering Peace and Patriotism during the First World War (Palgrave Macmillan, 2017), and Britain in the Islamic World: Imperial and Post-Imperial Connections (Palgrave Macmillan, 2017). Dr. Olmstead has also published The United States’ Entry into the First World War: The Role of British and German Diplomacy (Boydell & Brewer, 2018). He has contributed a chapter on the impact of military drones on foreign affairs in The Political Economy of Robots (Palgrave Macmillan, 2018). Currently, he is the Assistant Editor for The Middle Ground Journal, Treasurer and Director of Membership for Britain and the World, and president-elect of the Western Conference on British Studies. Justin undertook his PhD at the University of Sheffield — you can find him on Twitter @OlmsteadJustin

Cover image: NATO 3-cent 1952 U.S. stamp, issued at the White House on April 4, 1952, honored the North Atlantic Treaty Organization (NATO). [Accessed 11 August 2020].

[1] Melvyn P. Leffler, A Preponderance of Power: National Security, the Truman Administration, and the Cold War (Stanford: Stanford University Press, 1992), p. 61.


[3] Timothy Andrews Sayle, Enduring Alliance: A History of NATO and the Postwar Global Order (Ithaca: Cornell University Press, 2019), p. 3.
