An Emotional Week in Berlin

Richard Firth-Godbehere is a Wellcome Trust funded PhD Candidate in the Medical Humanities at QMUL, trying to work out how revulsion came to be associated with the word ‘disgust’ in the middle of the seventeenth century, and how revulsion and repulsion were understood before this point. In this blog post he reports on the 2014 Summer School on emotions and Begriffsgeschichte at the Max Planck Institute for Human Development in Berlin.


At the end of September 2014, I attended a Summer School dedicated to the topic of ‘Concepts, Language and Beyond: Emotions between Values and Bodies’ at the Max Planck Institute for Human Development, Berlin. Over five and a half days, academics and PhD candidates from around the globe, together with faculty members from the institute’s own Centre for the History of Emotions, gathered to struggle with the question of how Begriffsgeschichte (Conceptual History) could shake off its linguistic shackles and move beyond language into the worlds of the visual, the material, the emotions and embodiment.


Participants at the 2014 History of Emotions Summer School in Berlin.

The school heard presentations, debated set texts, and discussed short texts and presentations by attendees on a number of main themes: temporality, power, religion and secularity, translation, and affect theory. Three areas in particular resurfaced throughout the week: spatiality, temporality and power. This post offers an overview of these recurring points and what I learned from my time there – after the most obvious question has been addressed: what is Conceptual History?

What is Begriffsgeschichte?

Reinhart Koselleck


Begriffsgeschichte, or Conceptual History, is a type of history that attempts to reach basic concepts through the language used to describe them. It will, for example, take a concept like ‘civility’ and examine the ways the words related to ‘civility’ – its semantic net – develop over time, changing the meaning of the concept as they do so. This way of approaching a history of ideas was first formulated by Reinhart Koselleck, particularly in the multi-volume collective work Geschichtliche Grundbegriffe (Basic Historical Concepts). In his 1972 introduction to the work, Koselleck explained that his methodology entailed identifying ‘basic concepts’ – ‘not those technical terms registered in the handbooks and methodological works of historical scholarship … but those defining concepts (Leitbegriffe) which must be studied historically’. These include key terms, self-characterisations, slogans and ‘concepts which claim to the status of theory (including those of ideology)’.[1] The idea is that as the semantic nets relating to these concepts change their meanings, so do the meanings of the concepts themselves, and the one can be interpreted through the other. This could be a powerful method, especially when applied to the language surrounding emotional concepts, but it does have its drawbacks. Those drawbacks, and attempts to surmount them, were the primary focus of the summer school.

Concepts in Space

The first recurring problem is that of transnational and multilingual Conceptual History. How can a methodology that focuses on semantic nets be applied to transnational or even global history? The worries here are many. Do the concepts related to those words share any family resemblance at all from culture to culture? Concepts certainly do cross borders, but how can Conceptual History follow that transfer without running into the further problems of translation? Is it possible to find a universal, global, basic concept without imposing Eurocentric or North American assumptions that might distort what that concept really means in, say, India? It seems to me that this is not simply a problem of transnational historiography, but one that can exist within nations. For example, how do we know a well-educated elite’s understanding of the concept of love is the same as that of an illiterate worker? How can we be sure that a Latin text discussing the passions yields a semantic net that can be mapped onto a vernacular text? A thorny knot indeed, and perhaps one that Conceptual History alone will find difficult to untangle.

This focus on the spatial aspect of Conceptual History also arose when discussing visual cultures. This extra-linguistic spatial element seems to be missing from Conceptual History altogether. When we visited the Gemäldegalerie, Berlin’s excellent pre-Renaissance art collection, my repeated asking of the tried-and-trusted questions – ‘where did this come from?’, ‘where was it in the building?’, ‘who commissioned it?’, ‘how much did it cost?’, and so on – seemed almost as if I were uttering a foreign language (and in fact I was speaking English, so, technically, I was). Yet, for Conceptual History to break free from its linguistic shackles, it cannot be afraid to use methodologies that have served other disciplines, such as visual cultures, so well.

Concepts in Time

Another important part of the discussion focused on temporality. Central to this was Koselleck’s idea of layers of time, as well as the methodological problems and opportunities of Ernst Bloch’s idea of nonsynchronicity and the barely translatable Die Gleichzeitigkeit des Ungleichzeitigen: the synchronicity of the nonsynchronous, or the simultaneity of the non-simultaneous. This is best expressed through Koselleck’s phrase, ‘it is, after all, part of our own experience to have contemporaries who live in the Stone Age’ (Reinhart Koselleck, The Practice of Conceptual History: Timing History, Spacing Concepts, trans. Todd Samuel Presner, Stanford University Press, 2002, p. 8). Putting the teleological and Eurocentric implications of Koselleck’s words to one side, concepts develop, wax and wane at different speeds in different places. In the history of emotions, it is perhaps William Reddy’s emotional regimes and Barbara Rosenwein’s emotional communities that best exemplify this: an understanding of hate that is shared or expected by one section of society may not be by another, which holds onto older, newer, or rediscovered conceptions of hate that are quite at odds with it.

This leads to another temporal worry, that of anachronism. One of our discussions, inspired by Margrit Pernau’s research, focused on the concept of nostalgia as a political emotion in late-nineteenth and early-twentieth-century India. The immediate problem here was: can you really describe people from another culture in another time as having felt ‘nostalgia’ in a western, modern sense? The answer seems to lie in how you frame the concept. If the study explicitly states that it is using the term ‘nostalgia’ (or ‘disgust’ or ‘love’) in an instrumental, or even anthropological, way, then that would be fine. Saying ‘I am using nostalgia as we know it only to see what was going on and being expressed in this particular instance’ would allow similarities and differences, even the very existence or non-existence of the emotion, to be teased out. If, however, you were to reach back into the past and impose a modern concept onto it, that would be deeply problematic.

Concepts of Power

Power was a topic addressed both implicitly and explicitly. Concepts of power were discussed directly – particularly the use of the word ‘injury’ in the run-up to the Opium Wars, and ‘terrorism’ as an instantly vilifying and othering word – but often the power in question was found in other areas. For example, what power or powers were responsible for concepts changing? How do we identify them? How political are these concepts? Should we examine such concepts through philosophical and elite expressions of them, or from below? If the latter, how can an area like subaltern studies – the study of those whose voices are all but lost from history, or who are oppressed by another group – get to these concepts without, as was suggested, redefining what a subaltern group is? These seem, to me, to be the thorniest issues surrounding attempts to take Conceptual History beyond text, and they lead to another set of questions. How do you approach the political and power relationships of the body through Conceptual History? Could this be done through the study of how bodies and the senses were used in the formation of cultures? How could you apply the ideas of Conceptual History to politically motivated building projects, such as communal living, or to religious architecture and its associated artwork?

An answer was suggested using a fascinatingly Lockean idea: that concepts are understood as the sum of how our senses apprehend the external, objective, material world. A concept could thus include words as well as related images, emotions, sounds, expressions and other things besides. The example given by Imke Rajamani in her talk, ‘Anger in popular Hindi cinema: Exploring a concept beyond language’, was the Bollywood genre of ‘Angry Young Man’ films and its multisensory depiction of anger through colour, facial expression, elements of mythology – in this case the beliefs surrounding Diwali – and expected tropes in the soundtrack. This was interesting and, as with visual cultures, is a kind of analysis that practitioners of visual anthropology, visual sociology, and film studies do rather well. Taking methods from these disciplines would be a good way forward, but it does not take us any closer to the examination of power.

In addition to the power at play in history, the conversation also regularly returned to the power and politics flowing through historians themselves, especially the thorny issue of Eurocentrism. When exploring concepts beyond borders, in others’ bodies, and across social groups, the historian wields a great deal of power. Conceptual Historians can easily write in concepts that did not exist, putting words into people’s mouths and badly translating historical ideas into modern European concepts. Ultimately, it is the historian who has the power to change concepts and meanings from one constructed era to another, not the people of the time, and this power relationship has to be carefully balanced before the genuine political and power relationships can be understood. The same is true with emotions: it is easy to take modern emotional concepts and try to find them in the past, as in the case of nostalgia. This leads me to the biggest problem I see in the whole endeavour.

Conclusion

Conceptual History is a tool, but it has a major flaw. Changes in words do not change concepts; rather, changes in concepts cause words to be used differently. While expanding beyond text into a semiotic net is a way forward, what this should show those using this type of conceptual history is that the concept is the tree upon which signs and words hang, not the other way around. This leaves two options. The first is to do word histories, or historical semantics, à la Thomas Dixon (for instance in his 2008 book, The Invention of Altruism), in which words are studied closely as usages are attached, altered, discarded and reinvented. The second is to follow the concept and show how the words and signs relating to it change, adapting and altering as a culture grapples with it. In short, it means separating concepts from signs and language, and thereby ripping the heart out of Conceptual History.

For emotions, this is no bad thing. Word histories can trace the way a word, or even a sign, changed its relationship to emotions over time. The concept-first method is particularly useful in affect histories, for example in following the ways in which people attempted to define difficult-to-express feelings and experiences. But neither is Conceptual History. Some of the tools of Conceptual History may well remain useful to the History of Emotions, such as examining the relationship between a basic concept and its semantic/semiotic net at a particular point in time, but it would become one tool in a much bigger type of history: a History of Emotions.

Follow Richard on Twitter: @AbominableHMan


[1] Reinhart Koselleck, ‘Introduction (Einleitung) to the Geschichtliche Grundbegriffe’, trans. Michaela Richter, Contributions to the History of Concepts 6, no. 1 (2011): 7-37, quotations at pp. 7, 8.

The Falling and mass psychogenic illness

This weekend I saw a film called The Falling, shown at the London Film Festival. It’s directed by Carol Morley and is centred on an epidemic of fainting at an English girls’ school in the 1960s. It’ll be in cinemas in spring next year.

Two teenage girls – Lydia and Abbie – are best buddies, but then Abbie loses her virginity to a boy, leaving Lydia feeling abandoned and jealous. Abbie tries to explain what sex is like – ‘it’s a little death … it takes you to another place’ – and Lydia is desperate to escape into that other place. All is not right in her world: her dad ran off, her horny brother is into the occult, and her emotionally distant mother is agoraphobic.

Lydia then faints in a class, in a rather dramatic fashion, and soon other girls are following suit. Even a young art teacher succumbs to the spell. Are the girls faking it to get attention? Is it mass hysteria? An outbreak of the libido from the unconscious? Or has the charismatic Lydia become some sort of portal or channel for occult energy from the environment?

Morley has previously explored ‘mass psychogenic illness’ (‘psychogenic’ meaning illnesses in which physical symptoms have a mental cause) in a short film called The Madness of the Dance, in which a professor of medical humanities takes us on a tour of the condition: the dancing manias of the Middle Ages, outbreaks of biting and mewing like cats among young nuns, epidemics of laughing among Tanzanian factory workers, and so on. Fox News covered a recent case of this kind.

What’s going on in such cases? They seem to involve what psychologists call the placebo or nocebo effect – our bodies and immune systems are highly connected to our emotions and imaginations, and physical symptoms like nervous tics or compulsive laughter can spread between people through a sort of sympathy and suggestibility.

The preacher Jonathan Edwards observed this phenomenon in the mass ecstasy of the First Great Awakening in 18th century America, during which congregations fainted, screamed, sobbed, laughed and danced wildly. In his masterpiece Religious Affections, Edwards tried to discern what was genuinely spiritual in these mass ecstatic outbursts, and what was psychological or pathological. He suggested that sometimes it is more the influence of custom or imitation than a genuine visitation of the Spirit – people are following a learned script.

Similar mass ecstatic outbreaks regularly occur in charismatic churches – most recently in the Toronto Blessing of 1994, which spread to the church I sometimes go to, Holy Trinity Brompton. I’ve been in the middle of highly charismatic services in Wales, with people fainting and rolling on the floor, and have had some experiences like that myself. Certainly, people are following a script, and the physical symptoms are triggered by their expectations (they came to get down, as it were). But there may be something more at work, too …

Such outbreaks of ecstasy can also occur outside the church, for example in raves. In the 1990s, at the same time as the Toronto Blessing, acid house and trance music spread across the UK, including to the Hacienda, where Carol Morley regularly went. I wonder if her interest in this area partly stems from that experience of ‘the madness of the dance’ – it’s certainly what got me interested in this area. Think of, say, Beatlemania, or the Jitter-Bug, or girls screaming as Elvis twitches and sings ‘well bless my soul, what’s wrong with me, I’m itching like a man in a fuzzy tree…’

Such outbreaks clearly have social determinants: they can be a reaction to overly rigid, hierarchical or depressing social conditions, to the discontents of civilisation, to the role you are expected to play – this was ably explored by Erika Bourguignon in her 1973 book, Religion, Altered States of Consciousness, and Social Change. Humans need ways to lose themselves, to go beyond the ego and travel to ‘another place’, and if their culture doesn’t give them that, nature will find a way.

But is there anything spiritual in such occurrences, or are they just regressions to primitive or infantile stages of development, as Freud would suggest?

Balancing medical and spiritual explanations

Morley tries to keep the question open and ambiguous in her film, balancing medical explanations with more spiritual ones – that the outbreak is somehow connected with the occult, with ley lines, with a numinous energy in nature.

But it was interesting, in the audience Q&A for the film, how audience members went straight to the medical explanation: this was a film ‘about’ mass psychogenic illness or mass hysteria. It reminded me of the reception of the film Shame, which my colleague Katherine Angel noted was quickly boxed off as a film ‘about’ sexual addiction.

The possibility that this is also a film ‘about’ spiritual energy was completely ignored – although Peter Bradshaw’s review of the film in The Guardian was open to that possibility.

As Steve Taylor notes in this excellent essay, most traditional cultures have some concept of spiritual energy – it is called shakti or prana or kundalini in Hindu culture, chi in Chinese culture, mana in Australasian cultures, pneuma or the Logos in ancient Greek culture, wyrd in pagan culture, spiritus in Christianity.

There’s a common idea in every culture (except the modern secular west) that nature is infused with spiritual energy, and we can tap into it and access its power, either consciously – through worship or meditation or drugs or sex or magic – or unconsciously and accidentally, through spiritual experiences, near-death trauma, or sudden epidemics like dancing manias.

We seem to access this energy via altered states of consciousness, or what William James called ‘the subliminal self’. It also sometimes involves certain places – pilgrimage sites, particular mountains or fields. And this energy can apparently spread from consciousness to consciousness, as it did in the Great Awakenings of American Christianity.

The modern, secular, mechanistic culture of the West defined itself against this idea, and debunked successive traces of it – whether Descartes’ ‘animal spirits’, the élan vital of Vitalism, Mesmer’s ‘vital fluid’, or the entire ‘spiritual energy’ industry of the New Age. That ‘exorcism of spirits’ from secular culture was not altogether a bad thing, because the concept was often used as a means to exploit or control the gullible. You must pay / obey this person, because they have incredible ‘spiritual energy’ and you can only access it through them – an old but powerful lie.

And yet we’re still haunted by the ancient idea of spiritual energy – Freud called it the libido, Max Weber called it charisma, William James spoke of ‘energy’ that can be accessed through spiritual or subliminal experiences, while today’s much more cautious psychologists still reach for terms like ‘mental capital’, ‘pool of attentional resources’ or ‘psychic energy’.

No one has ever found this energy or empirically measured it, so it’s easy to dismiss it as woo-woo, a vestige of the animist past we have thankfully left behind. But I’d suggest that most people engaged in some sort of spiritual or psychedelic practice have had some experience of accessing this power, and know that it can be both positive and healing, and also terrifying and disorientating.

It’s interesting that the author of the last mainstream book to explore dancing manias – the sociologist Barbara Ehrenreich, in Dancing in the Streets – recently ‘came out’ about having had a life-changing experience of numinous energy in nature when she was a teenager (she was immediately put on the Skeptic blacklist as a result).

Personally, I am inclined to believe this energy exists in nature and is connected to our consciousness, and that we can align ourselves with it through spiritual practice. But I may very well be wrong. We should simply admit that we don’t yet know – as the psychologist Mihaly Csikszentmihalyi recently noted, psychology doesn’t even have a working understanding of ordinary consciousness yet, let alone altered states of consciousness.

What is certainly the case is that some film-makers are exploring this shadowy area in interesting films – I wrote a piece last week about ‘the art of trance’ in the films of David Lynch, Fellini, Kubrick and others. Peter Weir explores it beautifully in Picnic at Hanging Rock, which is all about the dark numinous power of nature. In recent British cinema, films by Ben Oakley and Pawel Pawlikowski explore this dreamy terrain.

Morley’s film explores this zone too. It’s not just about mass psychogenic illness … it’s possibly a bit more spooky than that. It would be great to get Carol to come and talk about the phenomenon at the Centre!

Melancholia and the Problem of Retrospective Diagnosis: Post-Conference Thoughts

Dr Åsa Jansson recently completed her PhD at the Centre for the History of the Emotions at Queen Mary University of London. Her thesis mapped the re-conceptualisation of melancholia as a modern biomedical mental disease in Victorian medicine.

In this blog post Åsa offers her reflections on the conference ‘Gloom Goes Global: Towards a Transcultural History of Melancholy since 1850’, held at the Cluster of Excellence ‘Asia and Europe in a Global Context’, Karl Jaspers Centre for Advanced Transcultural Studies, Universität Heidelberg, October 2-4, 2014.


Some topics never seem to go out of fashion. At least since antiquity, philosophers, writers, artists, and doctors have been preoccupied with melancholy, or melancholia. The words derive from the Greek μέλας (melas) and χολή (kholé), meaning ‘black bile’. For centuries it was believed that black bile (one of the four bodily humours), originating in the gastric region, would overflow and rise to the head, clouding the soul and causing sadness and dejection. Humoural theories have long fallen out of use, but melancholy continues to evoke questions to which no definitive answers exist. What is melancholy? Is it an emotion? An illness? A literary trope? What does it mean to be melancholic? Have people everywhere, in all times, experienced melancholy? And what – if any – is the difference between melancholy and melancholia? These questions (and many others) were addressed last week at the conference ‘Gloom Goes Global: Towards a Transcultural History of Melancholy since 1850’, hosted by the Asia-Europe Cluster of Excellence at Heidelberg University.

Is this what melancholy looks like? Image from the conference poster. «Dies ist das letzte, was der Mensch noch nach seinem Tode zu erwarten hat» Pencil drawing from an exercise book by Johann Faulhaber, ca. 1908 ©Sammlung Prinzhorn Heidelberg, Inv.Nr. 1511


The conference lived up to its name. The meeting in Heidelberg was both genuinely transcultural and interdisciplinary, reflecting the ambivalent – or rather multivalent – nature of the topic at hand. As anyone familiar with the history of melancholy will know, this word has never referred to one, easily identifiable thing. Moreover, discussions of melancholy tend to draw in a number of other words and concepts that have at various times been perceived as closely related to melancholy and melancholia. Thus, the conference also generated discussions about sadness, grief, neurasthenia (or nervous exhaustion), suicidality, vishaad (dejection), depression, trauma, and nihilism in contemporary art, to name a few.

Over three days of lively discussions, a number of questions emerged that have particular bearing upon the history – and historicity – of the emotions. These were questions of the universality of emotions (do global emotions exist?), of experience versus expression of emotions (are both culturally produced?), and of the ever-shifting and difficult-to-define border between normal and pathological emotions. Finally, the tendency to relate, or even equate, melancholy to a number of other emotional terms or phenomena brought into focus the question of whether the history of melancholy is best understood as the history of a word or the history of a concept – or both.

Dr Frank Grüner from the Asia-Europe Cluster of Excellence opens the conference (October 2, 2014).


But the most heated debates concerned what historians of medicine like to refer to as ‘retrospective diagnosis’ – that is, the act of projecting current medical knowledge onto the past, and diagnosing historical subjects with today’s diseases. In this way, Victorian physicians diagnosed Shakespeare’s Hamlet with melancholia,[1] late-twentieth-century psychiatrists have given World War I soldiers PTSD,[2] and medieval saints have been described as schizophrenic.[3]

It is difficult to spend any considerable time researching the history of melancholy without coming up against the question of retrospective diagnosis. Not least because, in the twenty-first century, when we speak or write about melancholy we tend to do so with another ubiquitous phenomenon in mind – depression. While few would perhaps argue that the two words are synonymous, there is nevertheless a sense that they are closely related. Both denote low mood, dejection, sadness, and both can be used as an adjective for any kind of sad event or experience – a depressing film, for example – while both can also be used to speak about a medical condition. However, unlike in centuries past, melancholy is no longer a term favoured by the medical community itself, at least not in the English language, which distinguishes between (non-pathological) melancholy and (pathological) melancholia. It is the latter – melancholia as a medical condition – that has featured most prominently in late-modern discussions of retrospective diagnosis.


Renaissance artist Albrecht Dürer’s famous depiction of melancholia, 1514. Image credit: Wellcome Library, London.

The three terms melancholy, melancholia, and depression have overlapped throughout history, and in a broad, general sense the use of the latter has grown increasingly popular as the former two have declined. This does not mean, however, that ‘depression’ has simply replaced melancholy and/or melancholia. That there exist such a vast number of different historical narratives about melancholia, melancholy, and depression is not simply a result of different perspectives among today’s historians. Rather, it is a testament to the vast and shifting meanings that these terms have possessed over time. When it comes to melancholia in particular, the word has been used at least since antiquity to describe illness, but not one uniform disease. Thus, rather than speaking about melancholia as a single concept, the word is best understood as corresponding to a number of different – though often overlapping – disease concepts over the last two millennia.

Nevertheless, scholars writing about any of the various historical melancholias have often done so under the assumption that underlying the cultural and temporal differences in language and understanding is a more or less timeless condition. In particular, critics of the current model of ‘clinical depression’, which appears to grow increasingly inclusive and opaque, have turned to past descriptions of melancholia in an attempt to show that there exists a core condition – a severe depressive state accompanied by psychosis – that has remained relatively stable across time, but which has been eclipsed by the current fashion of extending the term ‘depression’ to an ever wider range of emotional states. At the opposite end of the spectrum, historians critical of a positivist and presentist approach reject any notion of a timeless biological model, arguing that emotions are culturally produced and cannot be easily transposed across time on the basis of present knowledge.

But perhaps the more pertinent question is not whether we can plausibly diagnose people of the past with today’s illness categories, but whether we should. What value does this act have? What purpose does it serve? The motivation appears, in the first instance, to be a legitimisation of present medical knowledge. In other words, if we can show that people have suffered from the same illness throughout history, this affirms that the illness is, in fact, real. But does it? As one of the conference participants demonstrated, there exists, in the present, sound and verifiable empirical data supporting the existence of a relatively uniform, delineated medical condition referred to in current psychiatric literature as ‘psychotic’ or ‘melancholic’ depression, or simply as melancholia. The crux is that the ‘Bible’ of Western psychiatry, the notoriously politicised Diagnostic and Statistical Manual published by the American Psychiatric Association, does not recognise this condition. This means that people suffering from it risk being misdiagnosed and consequently excluded from the treatment that has been demonstrated to be most effective in alleviating the symptoms of this debilitating illness. There is, then, undoubtedly an urgent need to demonstrate the validity of the diagnosis.


A woman diagnosed as suffering from melancholia. Lithograph, 1892, after a drawing made for Sir Alexander Morison. Image credit: Wellcome Library, London.

But why do we presume that today’s scientific knowledge is legitimised through a perceived universality across time? This view arguably derives from a Baconian perception of ‘nature’ as something that human beings can observe, intervene with, and learn from, and about which universal truths can be demonstrated through an inductive approach to knowledge. But as historians of science are well aware, this idea of nature is itself historically specific. And, more importantly, the scientific method cannot be applied to long-dead historical subjects whom we believe to have suffered from melancholia, nor to the documents they have left behind.

I would suggest that current scientific knowledge about melancholia doesn’t gain its validity and legitimacy from a presumed timelessness and universality. Rather, it is valid and legitimate because it works in the present and, therefore, holds true right now. Projecting it onto past and long-gone individuals who are only names on paper does not make it any more ‘true’ in the present. What it does do is threaten to demote history from its place as a rich, constructive, and critical human science – one that gives us a different and tremendously valuable kind of insight by showing us how things change and how knowledge is produced – reducing it instead to a one-dimensional discipline whose main (or even only) task is to lend legitimacy to current knowledge within the natural sciences.

So perhaps we would do better to separate the two. What we know today is not any less valuable or helpful because it applies only to the present and not to the past. When it comes to treating people, to alleviating ‘medicalised sadness’, it is our actions in the present and the future that matter. This is the real value of medical knowledge: what we can do with it right now. And the real value of history is not as a legitimising tool for such knowledge; rather, it is to show how present knowledge (medical or otherwise) was created, thus helping us gain a broader, deeper, and richer understanding of the human condition. A more fruitful way to view the relationship between the human and natural sciences, then, is as a complementary and mutually challenging one, in which each can learn from the other.

And when it comes to melancholy, melancholia, depression, and any of the other phenomena that seem to attach themselves to these terms, I would not be surprised if scholars are still debating – and disagreeing about – their meanings when those of us who participated in the Heidelberg meeting last week have ourselves joined the ranks of historical subjects.

Åsa Jansson

——–

1. W.F. Bynum and Michael Neve, “Hamlet on the Couch” in The Anatomy of Madness: Essays in the History of Psychiatry, Vol. I: People and Ideas, eds. W.F. Bynum, Roy Porter, and Michael Shepherd (London: Routledge, 1985), 290.

2. Ian Hacking, Rewriting the Soul: Multiple Personality and the Sciences of Memory (Princeton, NJ: Princeton University Press, 1995) [Chapter 17: ‘An Indeterminacy in the Past’]; Richard Norton-Taylor, “Executed WWI Soldiers to be Given Pardons”, The Guardian (Wednesday 16 August 2006) http://www.guardian.co.uk/uk/2006/aug/16/military.immigrationpolicy (last accessed 08/06/2013).

3. Jerome Kroll and Bernard Bachrach, The Mystic Mind: The Psychology of Medieval Mystics and Ascetics (New York: Routledge, 2005), 25-28.


Nancy Sherman, the soldiers’ philosopher

Professor Nancy Sherman has worked with the US military for over 20 years, and has written several books on military ethics, including Stoic Warriors: The Ancient Philosophy Behind the Military Mind and The Untold War: Inside the Hearts, Minds and Souls of Our Soldiers.

How did you come to teach philosophy in the military?

Through a crisis on their part. The US Naval Academy had a cheating scandal. Back in the 1990s, 130 electrical engineering midshipmen were implicated in cheating on a major exam. They seemed to have got it in advance. These individuals were all brought before various kinds of honour boards, and as part of the ‘moral remediation’ they wanted an ethicist on board. That was me. After two weeks they asked me to set up an ethics course. One thing led to another, and eventually I was selected as the inaugural distinguished chair of ethics at the Naval Academy.

How did you find teaching in the military?

My dad was a WWII vet, didn’t talk about it much. I was a child of the 60s, many of my friends were conscientious objectors. Now, I was in a place where there were marines and officers who had fought on the Mekong Delta. It was an eye-opener, to see the other side of a conflict that was very formative for me. I hadn’t really met my peers who had served. I learned a lot from them.

The Naval Academy is a different sort of university. It’s uniformed. Everyone is Ma’aming and Sir-ing. They’re trying to figure out what rank you are. They were used to a very hierarchical universe. And a lot of Navy people are engineer-focused. They want bottom lines. Discussions without clear endings, or deliberative questions without easy rights and wrongs, shades of grey – all of that was not something they were comfortable with.

But you discovered they have a natural interest in Stoic philosophy.

Yes. The course took them through deliberative models and major ethical theories – Aristotle, emotions, deliberation and habits; Kant and universalizability; Mill and Bentham, and notions of maximizing utility. When we got to Stoicism – Epictetus, Marcus Aurelius – they felt ‘this is the stuff I know: suck it up, truck on, externals mean nothing to me. I can’t get back for my wedding because I’m on a ship, well, it’s beyond my control.’

One of the greatest officers in their midst was Admiral James Bond Stockdale. He’d endured seven years in the Hanoi Hilton [the North Vietnamese prison], two of them in leg-irons. He’d been given a little copy of Epictetus when studying at Stanford. He committed it to memory and it became his salvation. That’s a well-known story in the military.

You met and interviewed Stockdale several times. What was he like?

He had a kind of James Cagney voice. And you couldn’t tell when it was him talking and when he was quoting Epictetus. It was seamless. You sometimes thought you were in front of an impersonator. He had a noticeable limp in his left leg, from when his plane was shot down over Vietnam, and Epictetus also had a limp in his left leg. So there was a physical kinship, and perhaps a spiritual kinship too.

Are the Stoics widely read in the US military? I came across quite a few Stoic soldiers when researching my book, particularly in the Green Berets – I didn’t come across any in the British military.

The Roman Stoics are read by officers and commanders, not so much by enlisted men. How they come to it is an interesting question. I think in the Marines and Navy it was probably through Stockdale’s influence on the curriculum – he was head of the Naval War College in Rhode Island. Also, these are popular writers, easy to read. Everyone understands stoic with a little s.

How useful or appropriate is Stoicism for soldiers?

It has curses and blessings. It fits an idealized model of invincibility, of external goods not mattering. I can expand the perimeter of my agency so that the only thing that matters is what I can control – namely my virtue. It meshes with what we know to be pretty natural responses to constant threat. As Stockdale once put it, you’re ‘as cagey a Stoic as you can be’. He was a cagey sage with his captors – this won’t touch me, this won’t affect me.

With that goes the notion that your emotions can be fully controlled and you can turn them off, essentially. Anything your emotions attach to in sticky and graspy ways is dangerous, because they can destabilize you, they can make you mourn and grieve. So there’s the idea of not missing something – a cigarette, your child, your spouse, or your buddy who gets blown up next to you. It’s useful armour. That’s the blessing.

The curse is it can be a way of not feeling, or as a lot of soldiers tell me, you feel ‘dead to the world’ – they can’t feel anymore. And that’s awful. You come home and you have this gorgeous child, and a family you want to adore, and you can’t even feel joy because you’ve turned off your emotions in certain ways. That is an absolute curse.

The Stoics were giving salvation for tough times. It’s a great philosophy for tough times, I’m not sure it’s a great philosophy for everyday living. It’s always good to feel more in control, but it’s not good to think that luck and the vicissitudes of the world can’t touch you or that you can’t show moral outrage, love, grief, and so on.

Do some soldiers manage to put on and take off that Stoic armour?

No, that’s really hard. This is a question about ‘resilience’ – the million-dollar word in the military right now. The idea of resilience is you can bounce back. We have 2.4 million soldiers coming home from war. They can’t bounce back on their own. They can’t bounce back just with their families. They need a community that gets it. They need to know that we’re not just saying ‘thank you for your service’. They need enormous amounts of trust, hope, medical attention. Above all they need emotional connection.

There’s an idea in Stoicism that your loyalty to the Logos, to the ‘City of God’, comes before your loyalty to the state. The Stoics were quite individualistic, probably not great team-players. How does that fit in with the very strong collective or conformist ethos of the military? What if you’re asked to do something that doesn’t fit with your virtue?

The best service-member will never check their conscience at the door. It will be with them all the time. That’s not just Stoic. That’s any moral philosophy – you do the right thing. Your virtue is your guide. If you have an officer, a commander, who is giving you unlawful, immoral, bad advice, and it’s even part of a system – of torture for example – the moral individual will question that, whatever philosophy they have.

Major Ian Fishback

One of my friends is Ian Fishback, who now teaches at West Point and is going to do a PhD in philosophy at Michigan. He’s a Special Forces major. He served eight years or so in Afghanistan and Iraq. He was at Abu Ghraib and didn’t like what he saw there. He wrote at least 50 letters to command about what was going on. He got no answers. He finally wrote to Senator John McCain, who’d been a POW with Jim Stockdale, and said ‘this is what I’m seeing’. He went public. He blew the whistle. And from that came a measure that was put before Congress. To know Ian is to know that he is thoughtful. He is conscientious.

To be in the military is hard for the thinking soldier. All the people I work closely with, all my PhD students from the military – they have to accept some of the absurdities of a career in the military, but there are some missions you can’t accept. You pick your battles. And it may be a career-ender. You have to face the possibility of not being a yes-man.

How well is the military coping with PTSD at the moment? How big a problem is it?

We don’t really know the numbers, but some say there’s maybe a 30% incidence of PTSD among soldiers coming home. It’s a central issue, which the Americans are taking on in various ways. The Pentagon, and in particular General Peter Chiarelli, wants to drop the D from PTSD. They argue it’s not a disorder but an injury with an external cause. They want to destigmatize it.

Secondly, there are vast efforts to deal with the suicide peak – for the first time since record-keeping began, the rate of suicide in the military exceeds the comparable rate for young male civilians. It’s not always after multiple deployments. Often the precipitating factors have to do with coming home, with difficult family relationships at home. It’s very complex. Some would like to find a ‘biomarker’ for suicidal tendencies.

There aren’t enough mental health workers, that’s pretty clear. And there’s still stigma, still a sense that it’s weak not to be able to handle losing your buddy.

Also, traumatic stress often has a moral dimension. It’s not just a fear symptom. It’s also that you keep going back to the situation and thinking ‘I should have done that, I wasn’t good enough, I let someone down’. What morality is in the context of war is complicated. You’re in a lethality- and violence-soaked environment, increasingly in population-centric environments. There’s a lot of grey area – who’s the enemy, are they a voluntary or involuntary human shield, and so on.

I read the military isn’t doing a great job at keeping track of what treatments for PTSD actually work.

Well, Cognitive Behavioural Therapy seems to be the leader. But you’re talking about populations that are heavily medicated, on sleeping pills, on anxiety pills, on pain-killers. And that affects their ability to change their thinking.

What do you think of Martin Seligman’s Comprehensive Soldier Fitness programme [a $180 million programme introduced in 2010 to teach resilient-thinking skills to all service-members, to try to prevent the occurrence of PTSD]?

This was introduced in 2009/2010 when the suicide rate was going up. They needed something fast. As one army psychiatrist said to me, they expected broken bodies; they didn’t expect broken minds. I think Seligman’s work has been shown to be effective with populations of children in tough neighbourhoods. He had not done previous work in lethality-saturated combat environments.

Emotional intelligence is a great thing, being able to talk about things soldiers don’t typically talk about is great. You need forums, you need lots of time. My understanding is you get two hours training twice a year when you’re not deployed. That’s not a lot.

Some military psychiatrists worry that the programme could further stigmatize those who still develop PTSD. If you’ve gone through the preventative programme and you still can’t sleep at night, you’re still racked by guilt, you may feel even worse. Prevention is one thing, but you can’t further stigmatize those who are traumatized. Still, I applaud the armed forces for realizing that mental health is critical for soldiers’ health.

You still work with soldiers now?

I have a lot of veterans enrolled in my classes at Georgetown. I’ve been working with soldiers for 20 years now. They’re my buddies. Next year I have a book coming out about soldiers coming home, called Making Peace with War: Healing the Moral Wounds of our Soldiers, which involved a lot of long interviews with soldiers. My heart goes out to folks who are trying to morally process really complicated issues.

To go back to the beginning, you initially started work with the military because of an ethical crisis, which they thought could be solved with an ethics course. Do you think ethics courses really do improve people’s ethical behaviour?

I think these courses have enormous value. Not when they have sets of right or wrong answers, but when you have small enough groups where you can have discussions. Finding time to think, when you’re not on the spot, is really powerful. It goes into the unconscious and is part of your reserves for hard times.

If you’re interested in the application of Stoicism in modern life, including the military, come to the Stoicism Today event on November 29 at Queen Mary, University of London.

The religious roots of cancerphobia

Fanny H. Brotons is a PhD candidate at the Institute of History of the Spanish National Research Council. Her dissertation focuses on the experience of Spanish and British cancer sufferers in the second half of the nineteenth century. In this blog post she explores how science and religion contributed to different definitions of the disease, and emotional reactions to it, including revulsion and fear, prior to the twentieth century.

Fanny is a member of the group HIST-EX, which studies the history and philosophy of emotions and experience. During the academic year 2013-2014, she was a visiting researcher in the Department of History, Classics & Archaeology of Birkbeck, University of London. You can follow Fanny on Twitter: @fannyhbrotons


Cancer has existed as a medical diagnosis since Antiquity. The Hippocratic corpus (5th–4th centuries BCE) coined the terms karkinos and karkinoma to refer to non-healing lumps and sores. The Roman physician Galen (2nd century CE) translated these names into the Latin word cancer and further described the disease within the long-lasting framework of humoralism. Cancer and fear have been inextricably entangled ever since, and physicians themselves are often the first to be held responsible for the terrifying diagnosis, when it comes.

Through the centuries, medical treatises defined cancer as the epitome of malignant tumours: painful, incurable, and deadly. Moreover, they often compared its course with the behaviour of an aggressive animal. It adhered to the tissues of the organism with the obstinacy of a crab seizing its prey in its claws. It recurred after the most thorough surgical excision, in the same way that some crab-like species can regenerate a claw lost in a fight. It was voracious like a wolf (lupus) or gnawing like a rodent (ulcus rodens). Most of the time, it was preferable not to touch it (noli me tangere), as irritation sped up its dissemination within the body.

“Different forms of cells that carcinoma can present”. Badía, Salvador. Del origen del cáncer en relación a su tratamiento. Est. tipográfico de Ramírez y Cia, 1876.

A major difference exists, however, between the descriptions of a fearsome disease and a dreaded illness. Despite the now-classic claims of Roy Porter’s medical history from below, the existing historiography of cancer has rarely considered the point of view of the sick people and their environment before the end of the 19th century. The main argument has been that cancer was a relatively rare disease before the changes brought about by two parallel processes. On the one hand, the reconceptualization of the condition as a proliferation of abnormal cells, following on from Johannes Müller’s studies in the late 1830s, gave rise to a steady pattern of increase in its incidence and mortality (Fig. 1). On the other hand, advances in public and private hygiene, combined with better nutrition, led to a significant decline in the number of people affected by epidemic diseases (with TB as a model). As a result, cancer became increasingly visible, and subjected to collective forms of fear. (1)

As medical statistics only gained momentum in the nineteenth century, it is hard to estimate how frequently cancer was diagnosed in earlier periods. It is clear, however, that the emergence of histopathology and oncology resulted in a progressive exclusion of conditions that had been previously considered cancerous on other grounds. Prior to its definition as a disease of the cells, cancer had been a disease of the skin, or, at least, a condition with manifestations in the skin. As such, it was recurrently linked with a feeling of revulsion and a fear of contagion. While these features were shared with other inveterate skin conditions, cancer was more specifically understood within a religious framework, either as a form of biblical leprosy or as a distinct disease inheriting the same stigmatising attributes.

In the Old Testament, leprosy – or צרעת (tsara’at) in the original Hebrew text – was both a medical and a moral condition. Since Antiquity, clergymen and physicians alike relied on this religious framework to describe the disease. As late as the 18th century, the Sevillian practitioner Bonifacio Ximénez y Lorite described leprosy as this “repugnant and disgraceful disease, dreaded by human beings, abhorred by God, whose venom disfigures, eats, and ruins the splendid machine of men and women, contaminates beasts, infests clothes, and seals horrifyingly even the houses in which its wretched sufferers live”. (2) Following the book of Leviticus, 13-14, lepers were historically confined in specific houses until their recovery or their death.

In the medico-legal instruction on leprosy that Ximénez published in 1766, he referred to successive medieval rules stressing that cases of cancer were not suitable for admission to Spanish leper houses. This persistent reference aimed to counteract the widespread medical idea that leprosy was “a cancer of the whole body”, as the Persian physician Avicenna had originally stated in the 11th century. Neither the medieval regulations nor Ximénez’s writings, which urged a similar clarification of the admission conditions of leper houses, were successful in removing this pervasive idea. A sovereign ordinance for the region of Andalusia issued in 1784 still stated that all sufferers from cancer with no possibility of being placed in isolation within their own neighbourhood had to be confined in a leper house. (3)

“Catherine of Siena attempts to obliterate her bodily senses by drinking a cup of pus she has squeezed from the cancerous breast sores of a sick and ungrateful woman she is tending”. In Bell, Rudolph M. Holy anorexia. Chicago: Chicago University Press, 1987.

The literature on the lives of saints also stressed the historical connections between lepers and cancer sufferers. In the 14th century, Saint Peregrine Laziosi became known as “the new Job” after being miraculously cured of a cancerous ulcer in his leg by the touch of Jesus Christ. Saint Catherine of Siena, in turn, actualised the Lord’s compassionate command to look after the most miserable human beings on earth – again, the lepers – by drinking, with the greatest devotion, a cup of pus from a cancer sufferer (Fig. 2). In the second half of the 19th century, at a time when the majority of European leper houses had closed, the Women of Calvary followed the example of the female saint through the creation of hospitals for cancer sufferers across France, beginning in Lyon (1842) and Paris (1874), then extending to other cities such as Bordeaux, Marseille, Saint-Étienne, and Rouen, and even reaching Brussels (1886) and New York (1899). Within these establishments, popular cancerphobia encountered mystical cancerophilia as its reverse side. Mostly high-class widows, the Women of Calvary only found relief for their own suffering through a life of self-sacrifice directed towards providing loving care to the outcasts of society. (4)

The Christian roots of cancerphobia are still to be thoroughly researched. The project involves a deeper approach to the lay understanding of the disease that policy makers, charity founders and society at large possessed in medieval and modern times, considering the variability of beliefs and practices within Catholic and Protestant contexts, as well as the impact of Christian missions in European colonies. The major significance of cancer in contemporary societies invites us to explore people’s emotions in the past in order to better understand our present.


References:

(1) Classic contributions to the history of cancerphobia since the turn of the 19th century include: Sontag, Susan. Illness as metaphor. New York: Farrar, Straus & Giroux, 1978; Patterson, James T. The dread disease: cancer and modern American culture. Cambridge, MA and London: Harvard University Press, 1987; Pinell, Patrice. The fight against cancer: France 1890-1940. New York: Routledge, 2002 (French original version: 1992); Darmon, Pierre. Les cellules folles: l’homme face au cancer de l’Antiquité à nos jours. Paris: Plon, 1993. For more recent accounts focusing on Early Modern Britain and France, see, respectively: Kaartinen, Marjo. Breast cancer in the eighteenth century. London: Pickering & Chatto, 2013; and Moscoso, Javier. “Exquisite and lingering pains: facing cancer in Early Modern Europe”, in Rob Boddice (ed.), Pain and emotion in Modern history. London: Palgrave, 2014.

(2) Ximénez y Lorite, Bonifacio. “Instrucción medico-legal sobre la lepra, para servir a los Reales Hospitales de San Lázaro”. In Memorias académicas de la Real Sociedad de Medicina y demás Ciencias de Sevilla: extracto de las obras y observaciones presentadas en ella. Vol. 1. Seville: Printing house of Francisco Sánchez Reciente, 1766.

(3) Gazeta de Madrid, 1784, Nº 38 (11 May), pp. 10-11.

(4) Camp, Maxime (du). “Les Dames du Calvaire”. In La charité privée à Paris. Paris: Librairie Hachette et Cie., 1885, pp.213-271.

The Smile Revolution

Colin Jones is Professor of History at Queen Mary University of London, where he is one of the founding members of the Centre for the History of the Emotions.

This post, which first appeared on the Voltaire Foundation blog, marks the recent publication of his book The Smile Revolution in Eighteenth-Century Paris (Oxford University Press, 2014)


Portrait of Isabelle de Charrière by Maurice Quentin de la Tour, 1766 (WikiArt)

‘What can one say of a person who has suffered so much with heroic courage… the most horrible pains in the mouth, in the neck and on the brain; and who after nearly fifteen months spent peacefully without any suffering now despairs that her teeth, which look beautiful, are not good at all; and who at every moment thinks she will lose them; who dreams of this at night; who looks at them a hundred times a day; who imagines one is good for nothing when one does not have perfect teeth; and who is amazed at the thought of finding friends, lovers, a husband…’

(Isabelle de Charrière to Constant d’Hermenches, 6 May 1765)

The hysterical despair about the state of her mouth expressed by the 25-year-old Swiss-Dutch writer Isabelle de Charrière was a not uncommon Enlightenment reaction. With the entry of sugar into elite and even popular diets over the course of the eighteenth century, toothache could claim to be the mal du siècle. This was all the more anxiety-producing because the smile was becoming, in the public sphere, the badge of relaxed, unstuffy sociability and of healthy virtue. And the new smile of sensibility featured white teeth. Rousseau’s Julie and Samuel Richardson’s Clarissa had shown how it should look. So, more graphically, did Madame Vigée Le Brun: her white-toothed smiling self-portrait, displayed at the Salon in 1787 (and still viewable in the Louvre in our own day), caused something of a rumpus in the stuffy art establishment.

Elisabeth-Louise Vigée-Le Brun: self-portrait with her daughter, Jeanne-Lucie (The Musée du Louvre)

As I show in my book, The Smile Revolution in Eighteenth-Century Paris, the emergence of the smile of sensibility owed something to scientific innovation as well as to cultural trends. Modern dentistry emerged at precisely this time, with Paris as its most brilliant champion. The crude tooth-puller of yore now gave way to the dental surgeon who focused on tooth conservation rather than extraction.

New technologies of tooth maintenance and beautification emerged too, not least the humble toothbrush, which offered individuals a way of keeping Isabelle de Charrière’s nightmare at bay. A toothbrush was soon to be found in the nécessaire of every woman of sensibility, and many a man of feeling too.

An eighteenth-century horsehair toothbrush.

A ‘Smile Revolution’ appeared to be in the offing in late eighteenth-century Paris. It would take the Revolution of 1789 – and particularly the Terror – to destroy it. Despite this initial outing, the white-toothed smile would only conquer western civilisation in the twentieth century.


You can download and read the whole Introduction of Colin’s book in PDF form via the OUP website. 


Further reading: 

C. P. Courtney, Isabelle de Charrière (Belle de Zuylen).

Isabelle de Charrière, brilliant letter-writer and gifted novelist, is now recognised as one of the most fascinating literary figures of her time. In this lively and comprehensive biography, Cecil Courtney chronicles her life by making full use of the original sources, notably Belle’s extensive correspondence with many of the leading figures of her time.

Smelling the past

Jonathan Reinarz, Past Scents: Historical Perspectives on Smell (Urbana, Chicago, and Springfield, IL: University of Illinois Press, 2014)

Reviewed for the History of Emotions Blog by Catherine Maxwell, Professor of Victorian Literature in the School of English and Drama, Queen Mary University of London.

Jonathan Reinarz’s Past Scents is a very welcome addition to the burgeoning field of research on the history of the senses and, in particular, the emergent category of olfactory studies. Long classed as a lower ‘animal’ sense as opposed to the higher ‘intellectual’ senses like vision and hearing, smell as a topic of historical study received little serious attention until the 1980s. This changed in 1982 with the publication of Alain Corbin’s massively influential Le Miasme et la jonquille: l’odorat et l’imaginaire social, XVIII-XIXe siècles, a wide-ranging study of the social significance of smell in eighteenth- and nineteenth-century France, published in English in 1986 as The Foul and the Fragrant. In 1994, Aroma: A Cultural History of Smell by Constance Classen, David Howes, and Anthony Synnott offered the first comprehensive exploration of the cultural role of odours in Western history and examined olfaction in a variety of non-Western cultures. Both these seminal works, along with Patrick Süskind’s best-selling novel Perfume: The Story of a Murderer (1986), were crucial in helping arouse interest in the many ways in which smell informs identity and culture.

Since then, and especially during the last decade, work on olfaction has dramatically increased, with a current swell of interest in university literature and history departments. Much of this ongoing research – a significant amount of it the work of younger academics or PhD students – is as yet unpublished but will likely enrich the field over the next few years. In his Acknowledgements, Reinarz assiduously names other olfactory researchers, some of whom have since brought out work that he was unable to reference. One example is Victoria Henshaw’s Urban Smellscapes (Routledge, 2014), which regrettably appeared too late to feature in his chapter on smell and the city.

Nonetheless, this timely book will be of great interest, and a valuable resource, to anyone contemplating work on the cultural or historical significance of olfaction or simply wishing to explore the topic further. Reinarz, a medical historian and Director of the History of Medicine Unit at the University of Birmingham, gives a lucid, well-documented overview of the field, providing serious appraisals of the available scholarly literature. His book opens with a concise introductory overview of (mainly negative) historical perspectives on smell from classical antiquity onwards, a brief account of how various commentators understood the function of smell and classified odours, and an outline of his own project – six substantial chapters examining religion and smell, the perfume trade, and smell considered in relation to race, gender, class, and the city.

Much of Reinarz’s absorbing study deals with smell as a means of identity and differentiation, marking out one group from another: ‘Christian from the heathen, […] blacks from whites, women from men, virgins from harlots, artisans from aristocracy’ (p. 18). Chapter 1 on ‘sacred scents’ concentrates on ancient Christianity and the increasingly important role of scent in religious practices. Here Reinarz draws on the work of scholars like Susan Harvey to show how ‘smell became a key component in the formulation of Christian knowledge’ and how later ‘Aromatics enveloped every Christian home, shrine, tomb, church, pilgrimage site, and monastic cell and transformed these terrestrial places into ceremonial places’ (p. 27).

A nineteenth-century illustration of the camphor tree (Wellcome Images).


Chapter 2 examines the cultivation and production of perfume ingredients and their global trade from earliest times, along with scent manufacture up to and including the emergence of the modern perfume industry. It features a brief treatment of frankincense, myrrh, and camphor, ingredients considered especially valuable and desirable at specific times in perfume history. Chapter 3 on race turns more specifically to the issue of identity and, drawing on older sources and more recent work by historians and anthropologists, explores the way ‘the [racial] “other” has been defined as smelling different and, almost invariably, unpleasant’, as well as ‘the rich olfactory cultures beyond the Global North’ (p. 21). This chapter is enhanced by considerations of how diet is perceived to determine racial identity through odour – with specific racial groups despised as ‘stinking’. More positively it also discusses native peoples who use smell and odour in nuanced ways to conceptualise time and the human life cycle.

An illustration of the brewing process, published in Munich in 1884 (Wellcome Images).


Chapter 4 traces how at different historical moments different kinds of women – virgins, virtuous women, whores, adulteresses, and witches – were supposedly identifiable by their characteristic smell, and also the historically or culturally variable positive and negative associations of perfume when used by women for adornment and seduction. Chapter 5 considers how ‘References to smell are intended to put people in their proper social place’ (p. 22). It surveys different historical standards of cleanliness, perfumed luxury, and more specifically the bad smell typically associated with the poor. Concluding with a focus on the Victorian period when the poor and working class were often believed to be oblivious to bad smells, Reinarz shows how working-class trades such as malting and brewing required trained noses to maintain rigorous quality control.

The final chapter on the smells of the city investigates historical perceptions of miasma (bad-smelling air thought to cause disease), and documents how the characteristic odours of the city became regulated by sanitary and hygiene reforms – key examples being nineteenth-century London and Paris. It also notes the increasing and still prevalent ‘sanitarian’ tendency to deodorise public spaces so that the city loses its ‘memories’ and its soul. Reinarz concludes his book with a brief reflection on the significance of the topics covered, current research, and the observation that ‘in the fields of the humanities and social sciences, [smell] has only begun to show its potential to open vast territories of exploration’ (p. 218).

This is primarily a broad survey study that synthesises existing scholarship and thus predominantly summarises and reflects on the works of others who are specialists in their particular fields of enquiry, which assures its value as a work of reference. Reinarz gives due credit to his most important sources – not only Corbin and Classen & Co., who are repeatedly mentioned, but also Annick le Guérer on smell in the philosophical tradition (1992), Susan Harvey on scent in early Christianity (2006), Christian Woolgar on smell in late medieval England (2006), Holly Dugan on perfume in the Renaissance (2011), Mark Smith on race and smell (2006), Janice Carlisle on class and smell (2004), Geoffrey Jones on the beauty industry (2010), Nigel Groom on frankincense and myrrh (1981), and Jim Drobnick’s The Smell Culture Reader (2006) – to mention some of the most frequently cited authorities. Inevitably, in areas where research is still in its early stages, this can lead to dependence on a small number of sources so that, for example, the chapter on religion leans heavily on paraphrase of Classen, Harvey, and Woolgar. This is not to diminish its usefulness in showcasing major scholarship on an important, previously neglected topic, although one suspects that the chapters on class and the city, in which Reinarz was able to draw on his own historical expertise with regard to smell and Victorian culture, may have afforded him more direct satisfaction. Writing a book of this kind necessarily demands a selflessness that many academics would balk at. A judicious mediating presence, Reinarz does a commendable job in representing and evaluating the current state of the discipline.

Clearly a work of this kind cannot be expected to cover everything. From my own perspective I would have liked more focus on the figure of the olfactif, the individual with a refined sense of smell, and also positive emotions connected with smells, especially fragrant smells, such as joy and pleasure. Reinarz rather dutifully rehearses factual information about the history of the modern perfume industry but neglects to say anything about the large community of perfume lovers (perfumistas) who compulsively blog and chat about their obsession with contemporary and vintage fragrances on the many dedicated internet sites. Writing about one’s own perception of fragrance, an activity that commonly draws on a highly-coloured, often elaborate descriptive and affective language, is now an internet staple, influenced in part by key figures like Luca Turin and Tania Sanchez whose best-selling perfume guide has popularised witty eloquent ways of declaring one’s perfume preferences. The related phenomenon of the modern perfume memoir (Denyse Beaulieu 2012, Alyssa Harad 2012) and the growth of organised perfume events such as seminars and workshops also bear witness to the increasing need of perfume lovers to educate their tastes and express and share their passion.

Historic perfume bottles held at the Osmothèque, Versailles.

Fashion and trends in perfumery tend to echo and reinforce wider historical changes, so more acknowledgement of this by Reinarz would have been welcome. The modern phenomenon of niche perfumery and specialist outlets for customers who want to buy more unusual, artisanal perfumes rather than commercial brands would also have provided an opportunity for contemporary reflections on luxury, status, and difference. Reinarz mentions museums of perfumery in Grasse and Barcelona (p. 76), but neglects the Osmothèque at Versailles, the world’s most significant perfume museum, with its precious holdings of vintage perfume where visitors can smell careful recreations of fragrances no longer available. Reinarz declares that odours cannot ‘be stored over time, like most other artifacts unearthed in archives’ (p. 6). While this may be true of many odorous substances exposed to the air, experts now acknowledge that, carefully stoppered and stored, perfume can be preserved for well over a hundred years.

But these are merely one researcher’s cavils about what is an extremely competent and insightful survey of the history of olfaction and, moreover, a book that will be required reading for anyone wishing to explore the social significance of smell at different historical moments.

 

In search of transcendence, with Norman E. Rosenthal

Norman E. Rosenthal is clinical professor of psychiatry at Georgetown University School of Medicine. He is best known for having discovered Seasonal Affective Disorder and how to treat it with light therapy. He recently visited the Centre for the History of Emotions to be interviewed by his friend, Professor Tilli Tansey, for her Wellcome Witnesses of Contemporary Medicine series (that’s them both on the right). During his visit, I got the chance to interview Norm about his love of Transcendental Meditation (TM), which he wrote about in his book Transcendence (2012).

I’ve never practiced TM myself, but am interested in it as part of my research into ecstatic experiences in modern culture. Ever since the Beatles went to India to learn how to meditate from the founder of TM, Maharishi Mahesh Yogi, it has attracted many western devotees, including Martin Scorsese, David Lynch, Oprah Winfrey, William Hague, and Russell Brand (who calls Norm ‘a cosmonaut of consciousness’). But is it more than just a celebrity fad? Does it have potential as a public health intervention? What exactly is ‘transcendence’?

How has TM helped you, Norm?

I’ve been doing TM for seven years. It’s helped me in a lot of ways. The first wave of help came in the form of stress reduction. I don’t sweat the small stuff as much as I used to. Things seem to go more easily – it’s like you’re swimming through the ocean of life with less friction. At the second level, the effects of the transcendental state of consciousness enter your daily life. At that point, various changes occur. There’s a tendency to be kinder to other people and to oneself. There’s a tendency to feel more connected to a larger universe, and less focused on one’s own ego. The fear of death diminishes, worries about things I can’t control are less. And sometimes there can be just great joy, without any real good reason for it. Little things seem more joyful.

How does one practice TM?

Traditionally, TM has been taught one-on-one, by people who are well trained. In that regard it’s different from mindfulness, where there is a sense you can learn it from a book. Here, it’s more like an Eastern tradition of a master passing down a technique to a student, like martial arts or other practices that are handed down.

Then you get given your own mantra and you say it out loud?

You’re given a mantra but you don’t say it repeatedly. You’re taught how to think it in a certain way. Some people think the mantra has some magic significance – it may do or it may not, I don’t know – but the way we use the mantra in our minds, that’s something a seasoned teacher can teach you. Each of our minds works differently, and we might have impediments to using the mantra in the best way. That’s where a teacher can help you.

Maharishi Mahesh Yogi and the Beatles in 1968

Could you use any word as a mantra?

It would have to be studied. The people who have brought TM to the West took it from an ancient tradition, where over thousands of years people developed certain words they found to be conducive to a soothing effect or a shifting consciousness. Whether another word would be just as good – that would have to be tested. As far as I’m concerned, I would rather use the words that have been used historically, and do it in the way that it’s been taught. I want to embrace the technique in its purest form as it’s evolved, because that’s the form most likely to produce the effects that I’m seeking.

The words are from the Vedas?

Yes, I believe so, they’re Vedic in origin.

That makes me hesitant. I’d want to know what I’m invoking!

Well, these words, as best as I can understand, don’t actually have a meaning per se. I’m not worried personally that I’m invoking some heathen God or strange spirit that’s going to trouble me. I see the word simply as that, a word. I know that some orthodox Jews have raised this question, but others have figured out a way of not letting it perturb them. The nice thing to me about TM is it doesn’t ask you to buy into any cosmology, any higher power. So to my mind, it would be theoretically compatible with any religion and with no religion.

In your book, you often talk about people achieving ‘transcendence’ in meditation. What is this state?

A problem with that question is that it implies transcendence is always one thing. It can be many different things. It could be going into some kind of space that you’re barely aware of and then emerging twenty minutes later, and if someone asked you what that experience was like you’d be hard pushed to describe it. Or it could be a state of consciousness where you are alert but very relaxed and restful, where you lose a sense of boundaries and space and time, and it’s very pleasant. That’s what a lot of us mean, but it can be very different things from session to session.

Is there a risk in TM that ‘transcendence’ becomes a goal, which people become attached to?

Your teacher will make it clear that this is not the goal, and if you set this goal, you’re unlikely to achieve it.

Because reading your book makes me want to achieve transcendence.

The masters make it clear that you meditate to live, you don’t live to meditate. It’s the quality of your life you’re seeking to alter, not the quality of your consciousness. It’s nice if you have a pleasant experience – it’s reinforcing, and probably one of the things that makes transcendence work. But those who are steeped in the tradition will de-emphasize the consciousness-seeking aspects of it.

In your book you mention research on how people used to sleep in two sleeps, in between which they sometimes experienced a moment of wakeful clarity, sometimes known as ‘The Watch’, which you compare to the transcendent meditative state.

Yes, my colleague at the National Institute of Mental Health, Dr Thomas Wehr, carried out research on sleep in extended periods of darkness. Participants said their sleep was divided into two cycles, and in between they experienced a period of calm attentiveness, of crystal-clear consciousness. It wasn’t the sort of dysphoric description of insomnia that you often hear. It was a pleasant experience of restfulness that sounds very similar to TM. It could be that in TM we’re accessing that experience which we lost when we compressed our sleep into one cycle, with the advent of electric light. It could be like a missing nutrient in consciousness.

What are some of the physical changes which TM practice leads to?

There are some changes which occur in the actual meditation session itself, like slowing of the breathing, changes in the EEG, more alpha waves in the front of the brain, more brain-wave coherence. But probably of greater consequence are the long-term physiological changes, particularly the lowering of blood pressure, and how that plays out in terms of reduced risk of strokes and heart-attacks. Quite impressive statistics actually. Probably what’s happening is that, as you get upset throughout the day, you have spikes in your blood pressure. TM provides a sort of surge protector to these spikes.

TM in some ways is a private organization, a trademarked technique which is not cheap, costing something like £400 – £600 to be trained in it [although there are some subsidies available apparently]. Does that limit its potential as a public health intervention?

Firstly, it’s a not-for-profit. Its books are open for inspection. I don’t see anyone getting particularly wealthy from it. I understand their rationale for wanting to maintain quality control and not seeing the technique diluted by having it just out there. I’ve seen really wonderful results from experienced teachers sitting and working with individuals over time. I respect the decisions that they’ve made. I understand that these days when we get free information off the web, it could feel alien to pay for it. But it’s certainly worked for a lot of people I’ve seen. Professionals charge for their time and I have no problem with them doing that as well.

There seems to be a rising interest in academia in ‘contemplative studies’, both in the sciences and the humanities. Is that something you’ve noticed?

I haven’t tracked academic trends, but I certainly think it ought to be of great interest. When you have a technique that alters consciousness and which has huge physiological and medical impact, at a time when – to quote Wordsworth – ‘getting and spending we lay waste our powers’, there ought to be interest in it.

A lot of the studies for TM come from the Maharishi University of Management in Vedic City. Could there be a confirmation bias?

Yes, but a lot of those studies were funded by the National Institutes of Health, published in peer-reviewed journals of the first order. Having been in academia for a long time myself, I’d say more or less everyone has a vested interest in confirming what they study. I wish there were more studies that are completely independent, that would be nice.

In the UK, I believe a Maharishi free school has been started. What if a child didn’t want to study TM?

Nobody is forcing anybody to go to that school. It’s a speciality school, which is likely to attract people who find the TM element interesting. In the Bay area of the US, there’s been something called the Quiet Time Programme – they were public schools. Children whose parents disagreed with it were perfectly at liberty not to participate. They did that Programme at some of the worst schools, with marked results. But no one should be forced to practice it. It’s not mainstream like reading and writing, and shouldn’t be required.

You interviewed some of my creative heroes – Martin Scorsese, David Lynch and others – who practice TM. I wonder if creativity sometimes comes from dark and unresolved places in us. Could TM be bad for creativity?

David Lynch and Russell Brand

I’ve heard David Lynch respond to that question many times. He says there’s a myth of the artist suffering alone in his garret, and that’s a myth mainly used by French men to get girls. If you’re suffering and emotionally unwell you’re less likely to create something than if you’re well. That doesn’t mean you haven’t visited some dark places, but if you’re in the middle of the Dark Night of the Soul, you’re less likely to create than if you’re in a better place.

Are there any side-effects to TM?

There are side-effects to pretty much anything. Some people get side-effects from drinking too much water. And some people might get side-effects from TM, like it might disturb your sleep at the beginning. But on the whole it’s very safe.

Has TM changed your view of religion or spirituality?

I would say that it’s made me more spiritual. The issues of a higher power or life after death are not ones that concern me very much. But the issue of belonging to something greater than myself is a nice concept that I feel greater kinship with than I did previously.

I came away from reading Norm’s book sold on the mental and physiological effects of the ‘technique’, but unsure whether one really could chant a holy word from the Vedas for 40 minutes each day if one was, say, a Christian or Muslim or indeed a secular humanist. Perhaps Christians should practice the ‘hesychast’ technique instead – it’s an ancient Orthodox practice which involves the daily chanting of the Jesus Prayer. But there aren’t many practitioners of it in the west who could guide learners – there are some in the World Community of Christian Meditation.

A Christian would probably baulk at the instrumentalization of prayer for health benefits. On the other hand, a TM devotee might say the real aim of TM is the spiritual transformation of one’s self and society, and the health benefits are just a canny way to ‘sell it’ to a skeptical and secular audience. And is the desire for transcendence necessarily selfish? Some Christian mystics – such as the 17th century English mystic Thomas Traherne – would say experiencing the bliss of one’s deep consciousness is an essential part of finding God within and turning away from our addiction to the senses.

Some of the TM movement’s claims, like the claim that if 1% of the global population practiced TM, war would cease, seem to me over-optimistic (though no doubt the world would be somewhat improved). Maharishi tried to gather 2000 ‘pandits’ to chant continuously for this aim, and his organization is still trying: there are reports that Vedic City in Iowa (the global headquarters of TM) is now importing Indians and paying them $200 a month to live in a compound and chant continuously until the ‘Super Radiance effect’ takes place and world peace is automatically achieved. Yes, even global transcendence can be out-sourced to cheap Indian labour these days!

I wonder if the development of a technique for ‘automatic self-transcending’ which doesn’t require any ethical or metaphysical beliefs is a bit instrumentalized – do major ethical changes really happen all on their own, just through the daily practice of 40 minutes of meditation? Is it so easy to tame the selfish ego? It sounds very attractive, but maybe too good to be true…

On the other hand, Jesus said you can tell the tree by its fruits – Norm himself seems a very warm, kind person, who believes he’s been greatly helped by TM, as do other credible people. And the various NIH trials of TM also seem to show genuine physiological and emotional benefits (even though, ultimately, I wouldn’t want to meditate just to lower my blood pressure).

I am curious, finally, as to what conclusions we can draw about the self and nature from deep meditative practices like TM. What does it mean if, when we sit quietly and let our mind be still, we discover a place of joy and light within us? What does that say about the mind, and what does it say about the nature which created it? 

Norm’s latest book, by the way, is The Gift of Adversity: The Unexpected Benefits of Life’s Difficulties, Setbacks and Imperfections.

Paul Burstow on why the politics of well-being is not illiberal

Paul Burstow MP was formerly the minister of state for care services, and is head of a commission on mental health at the liberal think-tank Centre:Forum. That commission has just brought out its report, calling for various policies as part of a National Well-Being Programme. Here he tells the Centre about the proposals.

Why do we need a National Well-Being Programme?

If you look at what drives well-being and what the biggest cause of human misery is in our society, it’s mental illness. And we know, because the evidence is very robust, that there are many things we can do at a very early stage to reduce the number of people falling ill in the first place, and many more things we can do to promote a recovery, so people can function well even with a mental health problem. So it’s both reducing incidence and promoting recovery that led the Commission to argue very strongly that we need a service that helps people not just when they’re sick, but that actively promotes well-being.

Do you think there is room for improvement, then?

We’re still light-years away from where we need to be societally. We still have huge problems of stigma and prejudice. It’s very obvious in the workplace, where many people are very fearful of discussing mental illness with work colleagues or a manager, and that sort of attitude pervades all aspects of our society. It affects the way the NHS takes its decisions, and prioritises physical health over mental health. That sort of institutional bias is a very pervasive one.

Paul Burstow MP

In terms of focusing on early interventions, what would you do for children’s well-being?

We’re backing something that the former Cabinet Secretary Sir Gus O’Donnell proposed in his recent report on well-being – we need to build well-being into a whole-school approach, so things around life-skills, values and character are an integral part of the curriculum. That can make a profound difference over the course of a child’s development, and help them to become more resilient – the more resilient a person is, the more likely they can cope with adversity.

Do you mean have a particular subject in the curriculum dedicated to well-being?

We think that precisely how this is approached ought to be a matter for the school itself. We’re not advocating a new subject dedicated to well-being. But clearly there are things around life-skills which would be relevant to Personal, Social and Health Education (PSHE) and other areas. We don’t want a one-size-fits-all approach. Schools should have an overall responsibility for promoting well-being. OFSTED has a role in assuring the public that schools fulfil that responsibility – that was part of OFSTED’s remit before it was taken away, and we think it should be reintroduced.

SEAL: not a huge success

In the Noughties, schools had a commitment to teach Social and Emotional Aspects of Learning, but a lot of teachers felt they didn’t have the training to teach it, so either schools didn’t bother teaching it, or they did so but with some reluctance and awkwardness.

That’s a fair point, and that’s why we make recommendations about teacher training. We need to see a return to the historical position where teachers were given training about children’s development. That was the case until the 1980s. Teachers need to be better equipped to identify mental health issues and to sign-post support services.

One of the big things you’ve called for is parity in funding in the NHS for mental illnesses as for physical illnesses. Is that right?

As a proxy for this, we should at least be looking to match spending to the burden of disease. About 23% of all disability is associated with mental health. At the moment the NHS spends £10-13 billion on mental health services. We’re advocating prioritized growth in the NHS budget for mental health services. Each year, it should receive an increase in investment of about £1 billion, in order to close the massive treatment gap in primary care for depression and anxiety, which usually go untreated.

Where would that money go? More money for talking therapies?

It would go into delivering access standards; into delivering access to psychological therapies, particularly for young people. We know the lifelong benefits for early intervention are very significant indeed. It would also be about looking at improving the offer for perinatal mental health – there’s surprisingly little help available for dealing with maternal depression, which can have a big impact.

So what is Public Health England’s role in the National Well-being Programme?

PHE are already piloting national social marketing to promote the Five Steps to Well-Being, which are a good way of modelling this. We also think they have a role in supporting Health and Well-Being Boards, both in commissioning well-being services, and in making sure they use what they already have in the community. In the report, we note the evidence from the Liverpool Public Health Observatory, which examined all sorts of community and peer-support schemes, and showed the benefits they bring. These community services should be more visible.

Ian Walton, a GP and part of the Sandwell Clinical Commissioning Group, spoke about Sandwell’s experience at our report launch – they’ve started something called the Sandwell Well-Being Hub which maps some of these community services, so GPs can tell people what’s available and help them get in touch. It’s very much about enabling GPs to tap into the existing strengths and assets of communities.

And would that also mean helping those community groups get support and funding?

That was one of the points Ian Walton made. By getting Health and Well-Being Boards to engage with this agenda, mapping what’s already there… Some of these groups are very small and informal, and may need a place to meet, for example. So it may be about being a bit smarter in terms of providing premises.

What about Improving Access to Psychological Therapies (IAPT), the government’s flagship talking therapy programme. Is that getting enough funding and political support?

The adult and children’s IAPT services need to continue. But access needs to be widened – only 15% of those who could benefit are getting access. That needs to significantly increase. And too many IAPT services only provide Cognitive Behavioural Therapy. The improvement plan for IAPT back in 2011 said there would be a widening into other modalities like family systemic therapy. That needs to happen.

Why, if the government has been spending a lot more on talking therapies for the last five years, is spending on anti-depressants still going up and up?

Basically because we’re still not matching supply to demand in terms of the number of people with depression, anxiety and other mental illnesses, and IAPT referral isn’t always possible – expanding those services is vital.

Considering all the formal and informal initiatives to improve well-being over the last 30 years, from the massive expansion of anti-depressant use, to the increase in talking therapies – none of that seems to have impacted our national well-being level, which has stayed stubbornly flat since the 1950s. That’s perhaps because when you aggregate up to the national level, only major societal collapses seem to have much statistical impact. Given that, is it practical for governments to set national well-being targets?

If you were to set it as a national target, then no. It’s important to collect that data, it’s important to look at it. What we say in our report is we need more granular and local statistics, to help guide Health and Well-Being Boards and to measure the impact of services. The other thing we published yesterday was the UK’s first ever Atlas of Variation in terms of factors that underlie good or bad mental health – it shows stark variations.

What can government do to improve well-being-at-work?

It can take a lead in improving practice within public sector organizations. There are a number of available frameworks, such as Mindful Employer, which uses mindfulness; there’s the MIND mental health-friendly employer framework. So public sector organizations could use them, and government could use procurement processes to set standards.

Secondly, we think the work programme needs to change. It’s been a failure when it comes to people with mental health problems. Only 5,000 of the 125,000 who have been supported into work by the work programme have mental health problems, yet we know that some individual placement support schemes work. Employment should be seen as a clinical outcome. We also think that Access to Work could play a bigger part – only a very small number of people with mental health problems get any support to enable them to carry on at work or return to work.

The president of the Faculty of Public Health recently said we should move to four-day working weeks to reduce stress. What do you think?

It’s not about the quantity of time, it’s the quality of the job. So we wouldn’t support that.

What about more online resources? Both IAPT and Public Health England are trying to improve people’s psycho-education, to help them learn how to take care of themselves. But I don’t see much in the way of really good public health sites or even Massive Open Online Courses, that could bring mental health education to life in an entertaining and engaging way.

It’s a good question. Public Health England has a leadership role in this area, and is currently piloting some stuff.

Two final questions on the politics of well-being. Firstly, some people feel the government is overstepping the proper bounds of the liberal state when it tries to tell us how to be happy. What do you think?

John Stuart Mill was one liberal philosopher who pondered the balance between wisdom and personal autonomy.

If the intention was to tell people how to be happy, that would be overstepping. That’s not what this is about. When I was care minister, responsible for the Care Act, we made the notion of individual well-being that Act’s first organizing principle. At the heart of that is the idea that the individual is the best judge of what their own well-being consists of. Well-being is a very liberal concept – it’s not about telling people what they must do to promote their own happiness, but it is about recognizing barriers to well-being. The state has a role in terms of removing barriers and providing education to help people realize their well-being.

But there are perhaps broader cultural issues underlying the amount of depression and anxiety in our society, which the NHS is not well-placed to handle – I mean a general loss of a sense of meaning or purpose, and a decline in the sense of community. Over the last century or more, we’ve seen the decline of two grand narratives from which people drew meaning and community – religion, and political party or trade union membership. The modern consumer state seems good at providing short-term happiness, but not so good at giving people a more long-term sense of meaning, purpose and community. And that’s not something the NHS can really tackle, is it?

Well, that’s not an easy last question. What I’d say about that is, the NHS can’t do it all. It’s a reflection of the society it’s in. One of the things that good primary care is about is not just seeing the world through a medical lens. It’s about seeing the broader picture of the whole person and the community they’re in. We talk in the report about the role of primary care changing, and GPs having a role in mental and behavioural health, and linking individuals up to community services. It’s not about the state or GPs telling people what should be provided, but also tapping into what’s already there.

And how can GPs keep quality control of community groups? Some of them might be culty or extremist, or teaching people things that aren’t good for them. How do you keep quality control?

Sandwell is a good example – they keep a hub which enables GPs to respond to concerns. It’s about providing information and then letting people choose what they take up. The state can’t be the agency that gives the seal of approval to every community group out there, especially ones not funded by the state.

Philosophies for Life: the results of the pilot

This year I’ve developed and trialled an eight-part course in practical philosophy, called Philosophies for Life. The pilot was financed by the Arts and Humanities Research Council via Queen Mary, University of London. I trialled the course with three partner organizations: Saracens rugby club; New College Lanarkshire and HMP Low Moss prison; and Manor Gardens mental health charity.

The results were very positive – the coaches of Saracens said the philosophy club was ‘the most popular thing we’ve done this season’; the participants at Manor Gardens philosophy club reported feeling more socially supported, more capable of coping with adversity, and much more interested in philosophy. And the participants of the prison philosophy club said they found the club more enjoyable and useful than the prison’s CBT courses, and became more interested in philosophy as a result.

I now plan to launch commercially, working with businesses, NHS mental health services and other organizations, and also developing an online course for the retail market.