‘Reputation is the modern purgatory’

As part of #shameweek, I want to sketch a theory of shame’s crucial place in liberalism and in critiques of liberalism.

Secular liberalism, which was born in Athens in the fifth century BC, replaced the gods of Olympus with the god of Public Opinion. According to the fifth-century philosopher Protagoras, who is perhaps the father of liberal philosophy, what drives us to obey the law and fit in with the manners of civilisation is not fear of divine punishment, but rather our natural sense of shame and justice. These are the sentiments that enable us to live together in cities. Shame and the desire for public approval are the bedrock of liberal civilisation.

What really matters to humans, according to Protagoras, is not what the gods think of us (who knows if the gods really exist or not) but what other people think of us. ‘Man is the measure of all things’, he claimed, therefore the measure of our true worth is our public standing. This radical idea, which is the kernel of liberalism, introduces a new volatility and insecurity into public life. The old caste divisions are less certain, more fluid. To rise to the top of society, all one needs are the rhetorical and PR skills to win the attention and the approval of the public, and to avoid their censure. One’s ascent could be swift, but so could one’s fall.

Cicero, for example, manages to rise into the senatorial class of the Roman Republic on the strength of his rhetorical ability, and his ability to network, win friends and influence people. He is the archetypal liberal: deeply driven by the desire for public approval and, at the same time, racked with the fear of making a fool of himself in front of the public (we hear of Cicero suffering what sounds like a panic attack while giving a major speech, and he says in De Oratore that the better the orator, the more terrified he is of public speaking).

It is interesting, in this respect, that the first recorded instance of social anxiety is during the Athenian enlightenment, at the birth of liberalism, at the very moment when the public is being deified into an all-powerful god. We read in Hippocrates of a man who ‘through bashfulness, suspicion, and timorousness, will not be seen abroad; loves darkness as life and cannot endure the light or to sit in lightsome places; his hat still in his eyes, he will neither see, nor be seen by his good will. He dare not come in company for fear he should be misused, disgraced, overshoot himself in gesture or speeches, or be sick; he thinks every man observes him’. There is, I suggest, a profound connection between liberalism’s deification of public opinion and the terror of making a fool of oneself in public.

Later theorists of liberalism built their ethics on the same natural foundation: our desire for status and our fear of shame and humiliation. Adam Smith, in particular, built his Theory of Moral Sentiments on the idea that humans’ overriding drive is to look good to others and to avoid looking bad. We imagine how our actions appear to an ‘impartial spectator’, we internalise this spectator, and we conduct our lives permanently in its gaze – and that is what keeps us honest, polite and industrious. We perform constantly to an audience – in fact, Smith’s ethics is full of examples taken from the theatre; he keeps asking what looks good on the stage, what wins applause. Morality has become a theatrical performance.

Tatler, still going 300 years after it was founded

It’s a similar ethics to the one found in Joseph Addison’s Spectator and Tatler essays, where he imagines a ‘court of honour’ that judges the behaviour of various urbanites: this person snubbed me in the street, that person behaved abominably in the coffee-house, and so on. The All-Seeing God is replaced by the thousand-eyed Argus of the Public, which spies into every part of your behaviour, judges you, and then gossips. Unsurprising, then, that Mr Spectator himself should be a shy, self-conscious, retiring character – we long to observe the foibles of others, yet are frightened of our own foibles being found out.

This liberal, Whiggish ethics celebrates the city, because the city is where we are most watched, most commented upon, and therefore where we are most moral. The city makes us polite (from the Latin politus, ‘polished’) and urbane (from the Latin urbs). It polishes off our rustic edges and makes us well-mannered. Liberal ethics also celebrates commerce and finance, for the same reason. The man of business must carefully protect his reputation, because his financial standing, his credit, depends on the opinion others have of him. Therefore, commerce makes us behave ourselves. This theory, popular in the 18th century, is tied to the development of international credit markets – governments have to behave themselves now because they need to maintain the approval of investors (this is well explored in Hirschman’s The Passions and the Interests).

We can still see this liberal ethics of status and shame today, particularly in the neo-liberalism of the last few decades. It was believed that countries and companies are kept honest by ‘market discipline’ – by the gaze of shareholders and investors. If a finance minister or a CEO behaves badly or shamefully, if they fail to govern themselves or to apply fiscal discipline, they will be punished by the market. Likewise, our culture is ever more dedicated to seeking the approval of that god, Public Opinion, whose attention we court through blogs, tweets, YouTube videos, reality TV shows, through any publicity stunt we can come up with. The greater your public following, the greater your power.

Yet, right at the very birth of liberalism in the fifth century BC, a critique of it arose, also based on shame.

Plato insisted that liberal democratic societies had produced a false morality, a morality of spectacle. We only care about looking good to others, rather than actually being good. He illustrated this with the myth of the ring of Gyges, which makes its wearers invisible. If we had that ring, and were protected from the gaze of others, would we still behave ourselves, or would we let ourselves commit every crime imaginable? If all that is keeping us honest is the gaze of other people, then what really matters is keeping your sins hidden from the public.

Civilisation, Plato suggested, had made us alienated, which literally means ‘sold into slavery’. We have become slaves to Public Opinion, before which we cringe and tremble like a servant afraid of being beaten. We contort ourselves to fit the Public’s expectations, no matter how much internal suffering and misery it causes us. It’s far more important to look good to the Public than to actually be happy and at peace within. So we put all our energy into tending our civilised masks, our brands, our shop-fronts, while our inner selves go rotten.

One finds a similar critique in the Cynics and the Stoics, both of whom lambast their contemporaries for being pathetic slaves to public opinion, trembling at the prospect of advancement or of being snubbed. A man of virtue, they insist, cares only about whether he is doing the right thing; he doesn’t care how that looks to the public, how it plays on the evening news. Against the spin and sophistry of liberalism, they set the steadiness and self-reliance of virtue.

Diogenes the Cynic, attacking shame

The Cynic takes the revolt against liberal morality to an extreme. It’s a hypocritical morality, the Cynic insists, that divides our public from our private selves and makes us hide behaviour that is in fact perfectly natural. The Cynic breaks down the wall between the public and the private self. Anything which one is happy to do in private – such as defecation, say, or farting, or masturbation – one should be equally happy to do in public. The Cynics deliberately de-sensitised themselves to public ridicule, not just for the hell of it, but so that they could move from a false ethics based on looking good to others to a true ethics based on obedience to their own principles.

Today, perhaps, we are more than ever obsessed with our public standing, and terrified of public ridicule. As Theodore Zeldin wrote: ‘Creating a false impression is the modern nightmare. Reputation is the modern purgatory.’ We live, as Rousseau put it, ever outside of ourselves, in the opinions of others (this, in fact, is the meaning of paranoia – existing outside of oneself). This desperate need for public approval, and terror of shame or obscurity, is, I would suggest, at the heart of many of the discontents of liberal civilisation – social anxiety, depression, narcissism.

Yet we don’t have to accept these discontents as an inevitable part of civilisation, as Sigmund Freud or Norbert Elias argued. We can in fact modulate shame. We can reprogramme shame by reprogramming the attitudes and beliefs which direct it. As Plato, the Stoics and the Cynics suggested, we can challenge the values that give so much importance to status and reputation, and learn to embrace new values, which focus less on public opinion, and more on being true to our own principles. Many of the Greeks’ techniques for cognitive change are found in Cognitive Behavioural Therapy today – including a technique specifically designed to help people overcome a crippling sense of shame or self-consciousness. It’s called ‘shame-attacking’, and involves intentionally drawing attention and ridicule to yourself in order to de-sensitise yourself to the experience, just as the Cynics did 2400 years ago.

The shame of the philosophers

An intriguing new book about the philosophy of the emotions has just been published by Oxford University Press. It is not only a study of shame, but also a spirited defence of that emotion, co-authored by three philosophers: Julien Deonna, Raffaele Rodogno, and Fabrice Teroni. The book sets out to overthrow the received view of shame as a social but morally ugly emotion. The authors agreed to answer a few questions about shame as a feeling, an emotion, and an historical (or universal) phenomenon, for the History of Emotions blog.

Perhaps you could start by briefly explaining the two ‘dogmas’ about shame that you set out to refute in your new book, and why you think they are mistaken?

What we rather polemically call dogmas are two claims that structure much of the recent discussion on shame and that are very often taken for granted. First, there is the claim that shame is essentially a social emotion. One important consequence of this dogma is that shame is perceived as morally superficial because, in shame, one allegedly only reacts to how one appears to others. Second, there is the claim that the action tendencies associated with shame (hiding and aggression, for instance) or the emotional conditions that it is associated with (lack of empathy, depression) make it a morally ugly emotion we should try to get rid of. Now, these two dogmas form the contrastive background on which we build our own account of shame. In the process, however, we also try to reveal the grains of truth that these dogmas contain, even if they ultimately have to be rejected, for they foster wrong pictures of shame.

We argue, against the first dogma, that shame is not social in any substantial sense of the term. It is for instance not always elicited in the presence of real or merely imagined others. It is also wrong to think that shame exclusively targets a subset of the values to which we are attached, namely those linked to our image or reputation. One can be ashamed of appearing dishonest to someone, but one can also be ashamed of having behaved dishonestly, period. The grain of truth in the first dogma is that our reputation is a central value for most of us, and, for that reason, elicits shame when we perceive it as threatened. But it is only one value amongst many other values to which most of us are attached and that elicit shame when we perceive what we do or who we are as threatening them.

We criticize the second dogma, linked to shame’s ugly action tendencies and associated conditions, by arguing that the empirical data on which it is based are far from conclusive for a variety of reasons, such as the absence of distinctions between shame, shaming and humiliation, as well as between rational and irrational forms of shame. On the basis of these distinctions, we suggest that, while shame may sometimes connect with morally problematic action tendencies and emotional conditions, this is definitely not always the case. More generally, our view on shame is, against the dogmas, that shame is the guardian of our personal values and that, when everything goes well, it fosters behaviour which is in keeping with these values.

One of the questions that kept coming back to me when I was reading your book is whether there is really a distinctive feeling of shame. If you could be just dropped into the middle of the mental experience of someone feeling intense shame, but were not told anything cognitive or social about their situation, would you be able to say – ‘Ah, yes, this person is feeling intensely ashamed’? Or might you rather just identify their feeling as being ‘upset’ or ‘distressed’?

This is a very interesting and quite difficult question, for it is at the heart of many current debates in emotion theory. We do not address it directly in the book and the reason for this is that we take the answer to be more obvious than it probably is. The general issue is how rich the phenomenology of emotions is, and in fact whether it is rich enough to differentiate at least a substantial number of emotion types. Perhaps the relevant phenomenology is comparatively poor and suited only to differentiate very coarsely between those emotions that feel good and those that feel bad. Whatever one’s position on that issue in general, we tend to believe that the phenomenology of shame is very distinctive, all the more so if we consider intense episodes. Indeed, phenomenological reports of shame – by contrast with those of guilt for example – tend to converge on a set of basic features (feeling one’s face becoming hot, a feeling of shrunken bodily posture, one’s gaze turning away from what triggers shame, wanting to disappear, loss of control, etc.), many of which are subtended by very strong bodily changes (contrast again with guilt).

Moreover, and this is not a point we develop in the book either, we tend to think of the phenomenology of emotions as very closely linked with their intentionality, i.e. with the way the world is made manifest to us when we have the emotion. So feeling shame is difficult to separate from the "cognitive and social" dimensions of the situation in which it is elicited, and is, in and of itself, already a case of taking oneself to be worthless. All this favours the thought that if someone were dropped into the middle of a shame episode and if, in addition, this person were used to reflecting on and classifying her emotions, she would have no difficulty in recognising it as shame.

Following on from this, I wondered whether there is an important difference between being ashamed and feeling ashamed. Perhaps shame is not primarily a recognisable feeling with distinctive associated expressions and actions, but rather a judgement about oneself. Can one be ashamed not only without having any particular feeling, but perhaps without experiencing any emotional feeling at all?

Given our answer to your last question, you may predict at least part of our answer to this one. First of all and again, contrast shame with guilt in this respect. While it makes perfect sense to say “I am guilty, but I don’t feel guilty”, it is very strange to say “I am ashamed, but I do not feel ashamed”. This is because the locution “I am ashamed” is typically used to report on one’s present emotional condition; not so when I say “I am guilty”. This being said, while the said locution typically reports on one’s present emotion, it does not always do so. Part of the answer then consists in distinguishing between attributions of shame as a disposition as opposed to an episode. It makes perfect sense, for example, to think of someone as being ashamed of his background while not feeling ashamed of it right now – indeed he might be fast asleep. But this is one of the rare senses, we contend, in which it is possible to be ashamed without feeling ashamed and of course it constitutes no objection to the idea that shame is essentially felt. Another kind of case involves using the phrase just as a move in a language game. “I am ashamed but I forgot to send the letter” may just mean “sorry” and its function is perfectly well performed although one does not feel anything. This again should not be viewed as an objection to the idea that shame is essentially felt.

It is perhaps interesting to add that the literature is in quite broad agreement on this issue, and unless one believes of all emotions that their phenomenology greatly underdetermines their nature/identity (a common position in emotion theory), one will think of shame as phenomenologically rich and distinctive. This is again in sharp contrast with guilt, which has been claimed by some (e.g., Ortony 1987) not to be an emotion at all precisely because it has no salient and distinctive phenomenology and associated expression or display. Finally, it is worth emphasising that all these points about the felt character of shame are compatible with your idea that shame is essentially a judgement. While thinking of emotions as judgements is perhaps to over-intellectualise them, we agree that emotions are essentially evaluative attitudes we take towards objects. In our next book on the nature of emotions generally – which will come out in March – we try to explain the way in which emotions are essentially felt evaluative attitudes.

Great – another book to look forward to in March! So, what do you think the difference is between shame and embarrassment? Is this difference important for your argument?

Shame and embarrassment are, it is true, often elicited in the same circumstances, and we sometimes oscillate between these two emotions. Still, that does not mean that they do not differ along important dimensions. Once we start thinking about it, embarrassment appears to be much shallower than shame. Moreover, it is also important to emphasize that, while shame can be felt in private, this seems not to be the case for embarrassment. These two aspects of embarrassment – its relative shallowness and its social character – can be argued, as we do in the book, to reflect the fact that, in embarrassment, one perceives oneself as appearing – that is, merely appearing – in a given way to a given audience, or as being cast in a public role that one does not know how to play. This focus on mere appearances explains why embarrassment is shallow: one does not perceive one’s action as revealing anything about who one is – that is, as showing one to be someone incapable of living up to the personal values to which one is attached – which would be the case in shame. This characterization of embarrassment is quite important for our discussion, and more specifically for our attempt at criticizing some widespread arguments against shame that target its superficiality and on this basis conclude that it cannot play any substantial moral role (this is the first dogma above). In a nutshell, these arguments miss their target because they do not distinguish shame from embarrassment. While shame may also connect with appearances, we argue that it always affects the self in a much more dramatic way. If one were to feel shame rather than embarrassment because, say, one has committed a faux pas, this is because one perceives this faux pas as projecting more than merely an undesired image to a given audience; it has, for instance, to be perceived as reflecting badly on the type of person one is. This deeper relation to the self and its values is, we believe, a fundamental aspect of shame.

Finally, could I ask you what thoughts you have about the relationship between the philosophy of emotion and the history of emotions? Is there room for fruitful scholarly cooperation, and if so, of what kind?

A brief overview of the role and importance of shame in history is part of what triggered our curiosity about this topic. As we ask in our Introduction, why is it that thinkers such as Aristotle and Hume thought quite highly of shame while we, today, think it shallow and even ugly? We also tend to believe that our account of shame – which does not draw at all on the historical aspects of this emotion – is particularly amenable to fruitful collaborations with historians. One of the main points of the book is that we should refrain from thinking that shame is consubstantially linked to any one type of value or family of values. While shame always occurs when we severely undermine values to which we are personally attached and always involves apprehending ourselves as worthless, the range and type of values involved, and the particular pieces of behaviour and traits that count as breaching these values, vary immensely from one historical period and context to the next. It has been claimed that shame is fundamentally tied to sex, to dignity, to integrity or to the judging gaze of others, etc. While the evidence in favour of some of these claims may look more or less impressive if and when focusing on one historical period rather than on another, it is in fact just a mistake to look into the past for evidence pertaining to the nature of shame. Looking at the specific ways this emotion is culturally shaped and the various sorts of values to which it is sensitive in different historical settings is crucial not so much for our understanding of the nature of shame, but insofar as it constitutes an invaluable entry into the kinds of behaviour and traits that were thought to be degrading, and thus, more generally, the ways our more or less distant relatives conceived of themselves and their identity. To study the history of shame is thus to study the changing boundaries of what qualifies as a decent relationship with one’s values, as well as the various values that may be prevalent in different historical contexts. And there are reasons to think that such a history will have to be nuanced and complex. Our conception of shame, for instance, clearly goes against the idea that the prevalence of this emotion is a characteristic of primitive, less than morally fully developed societies (here, albeit for different reasons, we concur with Bernard Williams’ seminal discussion of shame in Ancient Greece). And while different societies at different times have surely put emphasis on different values, people have shown a much more constant attachment to some of them than many historians used to think (Duerr’s criticism of Elias’ understanding of pudor and shame is in this respect very instructive).

Do you believe, then, that there is an unchanging essence of shame (perhaps having a red face, averting your eyes, feeling small, and believing yourself to have fallen short of your own values) that has always existed, but has been given different names, and been hooked up with different value systems, in different historical cultures? If so, is this something that you think is a biological given? Or am I over-stating your essentialism here?

We are not (too) ashamed to say you are not far off the mark in the way you describe our essentialism here – shame is indeed characterised by the bodily feelings you mention and these feelings contribute in a holistic fashion to the subject taking some disvaluable fact as manifesting her worthlessness. In subscribing to this sort of essentialism, we agree with psychologists who, after due empirical research, by and large agree with the idea that shame, as they would put it, is a “pan-human” emotion. Two things should be kept in mind, however. The first is well acknowledged in the way you ask the question. Our conception of shame is abstract to a degree that makes it apt to account for the many different ways in which this emotion has been triggered and expressed through the ages. Second, and as we make clear in the book, nothing in our essentialism implies that shame has always been the same, i.e. prevents it from having evolved from an emotion (proto-shame for example) whose logic and function within the communities formed by some of our distant ancestors were simpler and different than those of shame as we know it today. It is sensible to claim that shame has evolved to be this pan-human emotion that we try and cash out in our book.

In Defence of Shame: The Faces of an Emotion is published by Oxford University Press.

Ugh! A brief history of disgust

Which emotion links some people’s attitudes to banking bonuses, journalists hacking the voicemail of the murdered schoolgirl Milly Dowler, our reaction to eating bitter lemons, the smell of hydrogen sulphide, and the Holocaust? The answer is disgust.

Disgust is a powerful emotion. Unlike other emotions, it operates in two domains. On the one hand, we find things ‘physically’ disgusting, such as the smell of rotten eggs or the taste of bitter lemon. On the other hand, we can be ‘morally’ disgusted at the actions of others, as in the case of the Milly Dowler phone hacking or the greed of the banking industry. Disgust crosses the moral/physical divide like no other emotion and, as a result, it has recently become a hot topic within the academic world.

From the early modern period onwards, a number of thinkers have discussed disgust. The first was Thomas Hobbes. Hobbes claimed that “when the Endeavour [an action] is fromward [away from] something, it is generally called AVERSION”. Hobbes realised that these aversions were sometimes “born with men” and at other times “proceed from experience” and saw aversion, and its opposite, “appetite”, as the two great overriding emotions that explain human motivations.

Immanuel Kant was the first to look at disgust in its own right. Kant took disgust to be a primarily aesthetic emotion: ugliness in the extreme. He was also the first to describe the symptoms of disgust in a way we would recognise today. He knew that sensory experience was the key, and that disgust was connected to what he called the "lower class" senses: taste and smell. These senses, according to Kant, are linked primarily with pleasure through oral intake. Smell, which can detect what we are placing near our mouths even before taste can, acts as a trigger for the physical reactions of disgust: facial grimacing, nausea and vomiting.

Charles Darwin was convinced that these reactions were innate, describing the ‘plainly expressed’ disgust of babies as evidence for an evolved harm-avoidance system, something many modern evolutionary psychologists agree with. Sigmund Freud tied Kantian ideas of oral fixation to the ‘sexual zones’. Freud linked our ability to walk upright with "raising our organ of smell from the ground," taking it away from the "position of the genitals". Freud believed that our genitals are beyond change and have "remained animal", thus explaining the disgust felt for these areas.

After Freud, psychology took a back seat and disgust became mired in dirt – literally. In 1966 anthropologist Mary Douglas told us that “dirt is matter out of place.” Dirt, to Douglas, was any anomaly that defied “basic [social] assumptions”. The pursuit of cleanliness was, therefore, an attempt to restore order and put the matter back in its place, physically and morally. Douglas was a powerful influence, used, for example, by Simon Schama in The Embarrassment of Riches to explain the obsession with cleanliness in the art of the Dutch golden age. Many other studies on cleanliness and dirt followed, but disgust was all but ignored.

That changed when Paul Rozin began to take an interest in disgust in the 1980s. Rozin’s ‘Hitler Pullover’ experiment – where people are asked if they are prepared to wear a pullover for a cash reward and are then, before donning the garment, informed it once belonged to Adolf Hitler – has become the stuff of legend. Rozin was the first to notice a link between disgust, essentialism (the idea that we attribute a soul-like ‘essence’ to certain objects) and sympathetic magic (the idea that this essence is transferable from person to object and vice versa). For many years Rozin pressed on with his research all but alone, but early in the last decade Professor Bruce Hood (of 2011 Christmas Lectures fame – see below, 17 minutes in) repeated many of Rozin’s experiments while investigating supernatural belief for his book SuperSense. This research showed that ideas of essentialism and sympathetic magic, and so ideas of disgust, do exist in preschool children. Perhaps disgust is innate after all?

Since this research, a number of books about disgust have become popular. These include half a dozen books on the psychology of disgust as well as many in other disciplines, such as Carolyn Korsmeyer’s investigation into aesthetics in Savoring Disgust, the sociological work of William Ian Miller in The Anatomy of Disgust, the phenomenological research of Aurel Kolnai’s On Disgust, Robert Rawdon Wilson’s literary investigation The Hydra’s Tale, the Wellcome Collection’s Dirt exhibition and book, and, more recently, Daniel Kelly’s superb interdisciplinary overview, Yuck! The Nature and Moral Significance of Disgust. The importance of disgust, described by psychologist Susan B. Miller as "the gatekeeper emotion", has been rediscovered; everyone wants a piece of it.

So what next for disgust? Psychology is already taking great strides, but the other social sciences also have great scope to expand our knowledge by investigating the social, cultural and even economic impact of disgust. With this new-found energy, it seems inevitable that academics in every discipline from philosophy to sociology will soon be taking the subject on.

I, for my part, intend to contribute by making early modern history just a little bit more disgusting.

Richard Firth-Godbehere is currently studying History and the History of Ideas at Goldsmiths, University of London and will be researching the Cultural and Intellectual History of Disgust in Early Modern Europe at Cambridge University from this autumn.

Should we all be popping ‘morality pills’?

Over at the New York Times’ excellent Opinionator blog, philosophers Peter Singer and Agata Sagan ponder whether we should all be prescribed ‘morality pills’ to make us more altruistic.  The authors write:

Researchers at the University of Chicago recently took two rats who shared a cage and trapped one of them in a tube that could be opened only from the outside. The free rat usually tried to open the door, eventually succeeding. Even when the free rats could eat up all of a quantity of chocolate before freeing the trapped rat, they mostly preferred to free their cage-mate. The experimenters interpret their findings as demonstrating empathy in rats. But if that is the case, they have also demonstrated that individual rats vary, for only 23 of 30 rats freed their trapped companions.

The causes of the difference in their behavior must lie in the rats themselves. It seems plausible that humans, like rats, are spread along a continuum of readiness to help others. There has been considerable research on abnormal people, like psychopaths, but we need to know more about relatively stable differences (perhaps rooted in our genes) in the great majority of people as well.

Undoubtedly, situational factors can make a huge difference, and perhaps moral beliefs do as well, but if humans are just different in their predispositions to act morally, we also need to know more about these differences. Only then will we gain a proper understanding of our moral behavior, including why it varies so much from person to person and whether there is anything we can do about it.

If continuing brain research does in fact show biochemical differences between the brains of those who help others and the brains of those who do not, could this lead to a “morality pill” — a drug that makes us more likely to help? Given the many other studies linking biochemical conditions to mood and behavior, and the proliferation of drugs to modify them that have followed, the idea is not far-fetched. If so, would people choose to take it? Could criminals be given the option, as an alternative to prison, of a drug-releasing implant that would make them less likely to harm others? Might governments begin screening people to discover those most likely to commit crimes? Those who are at much greater risk of committing a crime might be offered the morality pill; if they refused, they might be required to wear a tracking device that would show where they had been at any given time, so that they would know that if they did commit a crime, they would be detected.

Fifty years ago, Anthony Burgess wrote “A Clockwork Orange,” a futuristic novel about a vicious gang leader who undergoes a procedure that makes him incapable of violence. Stanley Kubrick’s 1971 movie version sparked a discussion in which many argued that we could never be justified in depriving someone of his free will, no matter how gruesome the violence that would thereby be prevented. No doubt any proposal to develop a morality pill would encounter the same objection.

But if our brain’s chemistry does affect our moral behavior, the question of whether that balance is set in a natural way or by medical intervention will make no difference in how freely we act. If there are already biochemical differences between us that can be used to predict how ethically we will act, then either such differences are compatible with free will, or they are evidence that at least as far as some of our ethical actions are concerned, none of us have ever had free will anyway. In any case, whether or not we have free will, we may soon face new choices about the ways in which we are willing to influence behavior for the better.

This may sound like science fiction, but many young neuroscientists are already researching morality pills, including Molly Crockett at the University of Cambridge, and Julian Savulescu and Guy Kahane at the University of Oxford, who are two of the editors of ‘Enhancing Human Capacities’, published in 2011. I haven’t read that book yet, but it looks absolutely fascinating. People have also considered the use of Ecstasy / MDMA to enhance empathy. And of course, scientists are now researching the use of psychedelic drugs to help people overcome depression and gain greater meaning in life. Hard to know whether to think of that as a resurgence of spiritualism, or the final triumph of mechanism…

As to the morality of ‘morality pills’…well, what do you think?

One could argue, perhaps, that many of us use personality enhancers – coffee to make us work faster, wine to make us more social (a sort of morality drug). On the other hand, as a friend of mine pointed out, who gets to decide what is moral and how the human personality should be chemically altered? What if such drugs are imposed on us without our consent (as they are often imposed on people suffering from schizophrenia to make sure they fit into our socio-ethical system)? What about the case of Alan Turing, the computer genius who was chemically castrated by the British government to stop him being homosexual?

In my opinion, scientists today, and even many philosophers, are far too happy to give up on the idea of responsibility, free will, human rationality etc. When you do give up on it, it very quickly means handing over power to an elite or ‘grand controller’ to steer the automatons of the masses in the right direction. It’s amazing, and startling, how quickly that idea is becoming mainstream and respectable.

‘Disgust is so hot right now’

An interesting piece in the New York Times, looking at the growing amount of academic interest in the emotion of disgust:

Disgust is having its moment in the light as researchers find that it does more than cause that sick feeling in the stomach. It protects human beings from disease and parasites, and affects almost every aspect of human relations, from romance to politics. In several new books and a steady stream of research papers, scientists are exploring the evolution of disgust and its role in attitudes toward food, sexuality and other people.

Paul Rozin, a psychologist who is an emeritus professor at the University of Pennsylvania and a pioneer of modern disgust research, began researching it with a few collaborators in the 1980s, when disgust was far from the mainstream. “It was always the other emotion,” he said. “Now it’s hot.”

The article goes on:

The research may have practical benefits, including clues to obsessive compulsive disorder, some aspects of which — like excessive hand washing — look like disgust gone wild. Conversely, some researchers are trying to inspire more disgust at dirt and germs to promote hand washing and improve public health. Dr. Valerie Curtis, a self-described ‘disgustologist’ from the London School of Hygiene and Tropical Medicine, is involved in efforts in Africa, India and England to explore what she calls “the power of trying to gross people out.” One slogan that appeared to be effective in England in getting people to wash their hands before leaving a bathroom was “Don’t bring the toilet with you.”

Disgust was not completely ignored in the past. Charles Darwin tackled the subject in “The Expression of the Emotions in Man and Animals.” He described the face of disgust, documented by Guillaume-Benjamin Duchenne in his classic study of facial expressions in 1862, as if one were expelling some horrible-tasting substance from the mouth. “I never saw disgust more plainly expressed,” Darwin wrote, “than on the face of one of my infants at five months, when, for the first time, some cold water, and again a month afterwards, when a piece of ripe cherry was put into his mouth.” His book did not contain an image of the infant, but fortunately YouTube has numerous videos of babies tasting lemons.

Let’s see some of that lemon-eating fun (no babies were harmed in the course of these experiments)…

The Natural History Museum: temple to science, God…or both?


Alain de Botton keeps coming up with new projects for his religion for atheists, and I admire his energy and willingness to put his ideas into practice. It’s refreshing. His latest plan takes very concrete form: he wants to build temples for atheists, starting with a pillar in London designed to give people a sense of perspective – it will show the history of the universe, with a tiny gold band at the bottom showing how recently man came on the scene. Good stuff: though a Stoic, or even some Christians, would argue this is just as conformable to theist as to atheist beliefs.

But naturally, the more ambitious and serious De Botton gets about his project, the more criticism he will encounter. Sure enough, Steve Rose wrote today in the Guardian that De Botton’s project sounds increasingly like a religion. Well, yes, that’s the point Steve. That’s why he called his book ‘A Religion for Atheists’. But we don’t need a new religion, says Steve. If atheists need monuments, they already have the Large Hadron Collider, the Natural History Museum, Wembley Stadium, even the Westfield Shopping Centre.

Not sure about that last one, though I guess it is certainly a monument to consumerism. Perhaps Steve is right – perhaps Las Vegas is a monument to atheism, a paradise city where everything is permitted and nothing is sinful. It’s where the Sceptics have their annual gathering, appropriately enough. Or is that the ‘wrong’ kind of atheism for Alain?

Anyway, of all Steve’s examples, it struck me that the Natural History Museum was closest to what Alain perhaps has in mind. The central hall of the museum really is very like a cathedral, with a sculpture of Darwin where the crucifix would be, and a giant (fake) skeleton of a diplodocus reminding us of the creation and destruction of nature, and the apparent absence of divine providence.

But is that really the message of the museum?

I looked into it today, and the real story is a little stranger. In fact, the founder of the museum, Sir Richard Owen, believed in transcendental morphology. He believed that a divine creative force moved through creation, and that God revealed itself through the evolution of nature. I quote from Nicolaas Rupke’s Richard Owen: Biology without Darwin. Owen believed that:

The history of scientific discovery had been a process of gradual self-revelation by God, not accidental but guided by illumination of ‘His faithful servants and instruments’, the scientists. ‘No scientific discovery collides against any sentence of the divine Sermon on the Mount’ [Owen declared].

Owen believed God’s self-revelation had been a continuous, progressive process, with new insights and information downloaded (as it were) in chunks, and accessed by prophets and scientists through history. He tried to combine belief in a transcendent creator with scientific optimism about evolution, and ended up falling out with both Darwin and the Church of England in the process. At one church service in 1876, for example, the priest criticised those who tried to replace God with science. To the shock of the congregation, Owen harangued the priest, declaring: ‘My Christian brethren! I trust with God’s help, that science will continue to do for you what she has always done, return good for evil!’

When Owen successfully lobbied for the establishment of a Natural History Museum in London, it was designed by the architect Alfred Waterhouse specifically as a ‘Temple of Nature’ to embody Owen’s vision of a nature guided by God’s transcendent power. In the words of the journal Architectural History:

The Temple of Nature that Alfred Waterhouse built embodied Owen’s belief that the history of the natural world was not a matter of randomness and chance but the creation of a transcendent presence.

So the Natural History Museum is really a monument to a moment in science before it moved in the direction of reductive scientific fundamentalists like Dawkins or Hawking, a moment of broader thinking – represented today by a handful of thinkers working at the cutting edge of science like James Lovelock, Roger Penrose or Rupert Sheldrake, who challenge reductive Darwinism and are able to think outside its narrow functionalism. Owen was a champion not of atheism but of that rare optimistic belief that science and theism are not incompatible, that scientists are revealing the transcendent power that moves through creation, and that there is more in heaven and earth than is dreamt of in Darwin’s or Dawkins’ philosophy. His statue looked over the hall until 2009, when it was replaced by a statue of Darwin to mark the bicentenary of Darwin’s birth. Time to bring the original founder back.

AA Long on Marcus Aurelius and the Self

For fans of Stoicism, here is a rare chance to watch the master, Professor Anthony Long of the University of California, Berkeley, giving a talk on Marcus Aurelius. Long perhaps deserves the most credit for reviving the academic study of Stoicism, which was almost completely ignored by classicists and philosophers for most of the 20th century. It was only when Long held a series of eight seminars on Stoicism at Oxford in 1970 that the picture started to change – and that philosophers like Myles Burnyeat, Jonathan Barnes, Richard Sorabji and Martha Nussbaum started to focus not solely on Plato and Aristotle, but also on the Hellenistic philosophers (and chiefly the Stoics) and their practical therapies of emotion.

Long is also an enthusiast for not just treating Stoicism as an academic subject but trying to apply its ideas and practices in life today. He’s fascinated that people are trying to be Stoics today, and he told me in an interview back in 2008 that he’s also started trying to follow Stoicism in his own life – and even to give talks on it in San Quentin prison! He mentioned after this week’s talk that Bernard Williams, the famous Oxford philosopher, had been very scornful of the idea of Stoic / Socratic therapy. ‘But then’, Long said, ‘I’m not sure Williams ever really suffered.’ What I like about Long is that he wants to bring the ideas of Stoicism to ordinary people, beyond academia. But, at the same time, he doesn’t want to turn Stoicism into bland, watered-down self-help, but to explore some of the challenges and paradoxes in the Stoic tradition.

In this talk, he discusses Marcus Aurelius’ idea of the self, and some of the paradoxes and problems involved in it. Are we to identify our ‘self’ with our reason only, or what the Stoics call our ‘ruling faculty’? What does it mean to ‘separate’ our reason or consciousness from our physical desires and passions – and is this really possible? Is our rational consciousness ‘us’, or is it a fragment of God, and so not really ‘us’ at all? Is our consciousness separate, alone, cut off from everything, or are we a tiny part of some great network of consciousness (in which case how much control over ourselves do we really have?) Not easy questions, but it is characteristic of Long not to shy away from asking them.


 

Why is British law on assisted suicide ‘inadequate and incoherent’?

Director of Public Prosecutions (DPP) Keir Starmer. Copyright (c) Press Association.

The Commission on Assisted Dying (COAD) published its report this month, recommending ‘providing the choice of assisted dying for terminally ill people.’ The report’s proposed changes focus specifically and exclusively upon ‘terminal illness’. David Cameron indicated prior to the report’s publication that he would resist changes in the law, as Jules Evans has already noted on this blog. The present operation of the law was influenced by the success of Debbie Purdy’s campaign in 2010 to get the Director of Public Prosecutions (DPP) [Keir Starmer, pictured] to lay out the factors that might lead a prosecution for ‘assisted suicide’ to be pursued ‘in the public interest’, and the circumstances in which the DPP would ‘exercise discretion’ and not prosecute, without changing the law. This situation is labelled ‘inadequate and incoherent’ by COAD. If legal incoherence is the key to reform, it is vital to understand the history of such a disorganised situation.

One of COAD’s key foci is the 1961 Suicide Act, which decriminalised suicide in England and Wales. The report states that it is ‘almost universally accepted that there needed to be some change to the terms of the Suicide Act 1961’, specifically section 2(1), which created the offence of ‘assisted suicide’. While suicide itself ceased to be a crime, a new, specific offence – now referred to as ‘assisting suicide’ – was created. The DPP’s oral evidence to COAD highlights the uniqueness of the legal position: ‘Under the 1961 Act there is obviously a broad offence of assisted suicide, it’s obviously peculiar because you’ve got aiding and abetting — using the old language — conduct which is not itself unlawful so you’re in very odd territory.’

Suicide was decriminalised in 1961 for a variety of reasons, including a broad concern about criminal law intervening in areas of ‘moral conduct’, and the unnecessary ‘distress’ caused to the family of the deceased by the taint of criminality. However, a key motivation behind the Act is revealed in a fractious exchange between Prime Minister Harold Macmillan and his reformist Home Secretary R.A. Butler. Macmillan grumbled, ‘[m]ust we really proceed with the Suicides [sic] Bill? I think we are opening ourselves to chaff if, after ten years of Tory Government, all we can do is to produce a bill allowing people to commit suicide. I don’t see the point of it.’ Butler countered that ‘[t]he main object of the Bill is not to allow people to commit suicide with impunity… It is to relieve people who unsuccessfully attempt suicide from being liable to criminal proceedings.’

These ‘unsuccessful attempters’ – for whom the stereotype was women under 30 – were fast becoming an epidemic phenomenon in British hospitals. Erwin Stengel, an authority on the subject, related in 1959 that ‘the police officer may inform the doctors in hospital that… he will bring a [criminal] charge for the purpose of having the patient put on probation under the condition that he consents to hospital treatment’, whether the doctor thinks such treatment is appropriate or not. Stengel also lamented that many patients lie about having attempted suicide because they ‘are afraid that the doctor may report their offence to the police’, thus denying themselves ‘the possible benefit of psychiatric treatment which is impossible if the truth is withheld from the doctor.’

Thus the decriminalisation in the 1961 Act can be seen as part of a government strategy aiming to provide better care for people who had ‘attempted suicide’. However, it was noted that such changes could open unintended loopholes. The terms of reference for the Criminal Law Revision Committee set up by Butler to investigate the practicalities of the law change showed acute awareness of this:

‘[t]he abolition of the offence of suicide would involve consequential amendments of the criminal law to deal with offences which would cease to be murder if suicide ceased to be self-murder… on the assumption that it should continue to be an offence for a person (whether he is acting in pursuance of a genuine suicide pact or not) to incite or assist another to kill or attempt to kill himself, what consequential amendments in the criminal law would be required[?]’

This is achieved through clause 2(1) which created the offence of ‘aiding, abetting, counselling or procuring the suicide of another’. Concerns around the potential culpability of supposed ‘suicide pact’ survivors – which came to prominence around the Homicide Act (1957) – were key in the creation of the law now contested in very different circumstances by COAD.

A concern about the right (psychiatric) care of those who survived ‘suicide attempts’ led to the creation of an offence specifically to limit the knock-on effects of decriminalisation and to make this ‘retraction’ of the law relate as precisely as possible to ‘attempted suicide’. The offence corresponds to a debate far removed from the current one around ‘terminal illness’. The concern of section 2(1) about undue ‘influence’ when assisting suicide is one of the only current resonances even though, in 1961, it aimed to protect minors, rather than the terminally ill.

The offence was created so that two specific and different kinds of ‘suicidal behaviour’ could be dealt with in different ways: providing for psychiatric treatment and closing resulting ‘loopholes’ for homicides disguised as failed ‘suicide pacts’. To expect it to function properly in debates over the rights of the terminally ill to retain dignity whilst protected against undue pressure – involving yet another type of ‘suicide’ – is unrealistic. The current Prime Minister may worry, like Macmillan, about opening himself to ‘chaff’ on suicide, but Starmer’s effort to bridge the gap between the law and the current debate requires a dangerously inconsistent ‘discretion’:

‘the position of the prosecutors has been historically that we won’t indicate in advance whether conduct is criminal or not… I do recognise that for professionals and others it can leave them feeling a little bit exposed when all they really want is some guidance.’

If the law is to work consistently, it must correspond more closely to the changing types of behaviour and debates that it is supposed to regulate. Historical understanding of the current ‘incoherence’ adds another voice to the call for change.

The Optimism Bias – overly pessimistic?

Yesterday I got the chance to meet the neuroscientist Tali Sharot and discuss her new book, The Optimism Bias, on Radio 3’s Night Waves. You can listen to the show here – it’s the last segment.

I enjoyed Tali’s book, and enjoyed meeting her, but I tried to express a polite scepticism about Tali’s thesis, which I characterised as:

1) We are hard-wired to see the world through the rose-tinted spectacles of delusion

2) This brings us many benefits – it means we live longer, are happier, and are not crippled with angst at the prospect of death. Optimistic people are also more likely to achieve their goals. The bias also causes us some problems, however – like the Credit Crunch, or over-optimistic wars like Iraq.

3) Even if we wanted to take off these rose-tinted spectacles, we probably couldn’t. Because they’re neurally wired into our brains, we are destined to make the same mistakes, ‘every single time’

4) And this is a good thing, because if we saw the world as it actually was, we’d be clinically depressed

I put it to Tali that this was a very pessimistic view of human existence. She countered that I’d misread the book, and that actually the last chapter recognised the pitfalls of the optimism bias, which is why she suggests we need to recognise the bias and try to balance it out.

But I don’t think I did misread it. She hardly discusses ways to balance out the optimism bias at all, besides one sentence saying optimism is like red wine – one glass is good but downing the whole bottle is bad (a line she got from two other optimism experts, Manju Puri and David Robinson). For most of the rest of the book, she argues that the optimism bias is ingrained in our brains, involuntary, automatic and unchangeable – and that it is good that this is so.

Throughout the book, she uses the deterministic, mechanistic, fatalistic language of neuroscience – she talks about our ‘seemingly automatic inclination to imagine a bright future’, argues optimism is ‘hard-wired’ into our brain, ‘it takes rational reasoning hostage’…’these defects are overpowering’…’No matter how hard we try, some mental and emotional processes are likely to remain hidden’, ‘you are still fooled. Every time. Every single time.’

Even when she explores how people find the ‘silver lining’ to catastrophic events like being paralysed, she says that this is a ‘trick’ the brain plays – as if learning to accept being paralysed is not a choice or an effort (in other words something voluntary, difficult, brave and noble) but rather an automatic and involuntary delusion. I think this is unfair to people who recover from trauma. Some people don’t recover from trauma. They kill themselves. Those who do recover have to work really hard to accept their situation. It’s not a ‘trick’ their brain plays on them. It’s a conscious, rational choice to see the facts of the situation, and accept them.

Tali insists in the book it’s good that our eyes are covered by automatic illusion blinders, because if we saw the world as it really was, we’d be clinically depressed and would die quicker. We need our illusions, she argues. She very briefly discusses the theory of ‘defensive pessimism’ – the idea that for some people, it’s adaptive and useful to think through what could go wrong, so that they can prepare for it. But just a few sentences later she insists ‘negative expectations can – literally – kill us’.

This is a startling claim. Get optimistic now or you will die! But her evidence for this very broad claim is just one study. It found that people who had a heart attack and were sure they were going to die did indeed die before too long. Well, we can all agree that if you are very ill and are certain you are going to die very soon, it will negatively affect your chances of survival. But that isn’t pessimism – that’s outright fatalism. Some other forms of pessimism – or tempered expectations – might be adaptive in other circumstances.

Her shakiest claim is that optimism is good because it creates self-fulfilling prophecies that make things happen. She gives very little evidence for this claim, which is perilously close to the notorious ‘Law of Attraction’, other than one instance where the coach of the LA Lakers said they would win the championship two years in a row, and hey presto, they did. She says: “Believing that a goal is not only attainable but very likely leads people to act vigorously to achieve the desired outcome.”

Well, sometimes, but not always. Sometimes optimism means we don’t prepare properly – I’m reminded of Pompey the Great before battle with Caesar, who was so confident of victory he held a big banquet the day before the battle (which he lost). I’m also reminded of a 2003 study of maths qualifications among the students of 29 developed countries. It found that American students were the most confident in their maths abilities of the 29 countries, but nearly the worst in reality. Asian students were much less optimistic in their abilities, but actually much better than their complacent American rivals.

Sharot argues that positive expectations make us happier by ‘making us believe success is just around the corner’. Well, sometimes, but not always. The Danes usually appear in happiness surveys as the happiest country in the world, and one researcher believes this is because of their low expectations – they expect each year to be worse than the last, and are pleasantly surprised when it isn’t.

In one eyebrow-raising passage, Tali suggests optimism developed at the evolutionary moment we became capable of imagining death, to protect us from this terrifying prospect through ‘false beliefs’. That is a damning account of the last 5,000 years of human culture. Yes, a lot of the time humans hide the fact of mortality from themselves – as TS Eliot put it, ‘humankind cannot bear very much reality’. But a great deal of the best art, literature, philosophy and religion is an attempt to confront the grim facts of life, including death. I don’t think you can sum all that up as consolatory ‘false beliefs’. Is Hamlet consoling? Yes – but not in a simplistic and delusive way. It tries to find a tragic wisdom born out of an acceptance of the brutal facts of life. Stoic philosophy, which we briefly discussed in the show, is also an attempt to accept and adapt to the hard facts of life. So is Buddhism.

‘Ah, but isn’t adaptation very optimistic?’, Tali asked me. Not according to her definition, no. Adaptation to the facts of life is realism. Optimism, according to her, is denying the facts of life.

So I think her book has too pessimistic a view of human reason and of our ability to overcome biases – including the optimism bias. CBT helps people overcome gambling addiction, for example, by getting them to keep track of their losses so they can see how over-optimistic they are being. I think she also has too pessimistic a view of humans’ ability to face up to the grim facts of life, and fails to take account of the role of art and culture in balancing out our inherent optimism (is this, perhaps, the adaptive social role of pessimists – and are a disproportionate number of pessimists writers, artists, historians and philosophers?).

Above all, I think her book bends the messy facts of life into too simplistic a narrative (a tendency psychologists call ‘the narrative fallacy’). She fits the complexity of human experience into the overly neat little categories so beloved of social scientists: on one side are the optimists, the heroes of her story; on the other are the pathological pessimists, who are clinically depressed and need help.

Why should we accept these simplistic categories and divide humanity into two neatly separated tribes? Flesh-and-blood humans defy such easy categorisation. My father, for example, has an almost apocalyptic pessimism when it comes to Arsenal football club. He is absolutely sure they will lose, every match – and is pleasantly surprised if they don’t. But in other areas of his life, he is quite cheery. My brother, to take another example, is a climate change expert – he thinks the world is heading for an environmental armageddon. But in his personal life, he is also a cheery soul. I myself am optimistic about some things, and pessimistic about others. And my positions are not hard-wired into me – they change over time.

To misquote Walt Whitman, we are large, we contain multitudes. We should resist social scientists’ attempts to herd us into pens and label some of us good and some of us bad.

Cold like the Dead: Learning Dispassion through Dissection

In 1672, a young anatomy student named Alexander Flint began doodling in his notebook (above) as his lecturer, James Pillans, droned on about the anatomical structures of the human body. In a small picture to the left of Flint’s notes appear the skeletal remains of some unfortunate soul—perhaps an executed criminal—who has in death become the object of this anatomist’s weekly lesson.

But the most interesting sketch looms above Flint’s notes, at the top of the page. It is the skeleton—not as a medical subject but as Death—with the words ‘fugit hora’ (literally, ‘the hour flees’). The skeleton may be anatomically incorrect—but these flaws are forgivable in a student who has only just begun his surgical career. In fact, it is likely because of Flint’s inexperience with the dead that we see these ‘memento mori’ (‘reminders of death’) plastered all over his early notes. He is not yet emotionally immune to the sight of dead bodies in the dissection theatre. [1]

Today, we call this ‘clinical detachment’, a term which historian Ruth Richardson points out is ‘less emotive, more scientific’. [2] It is not the by-product of a medical education; it is the goal. Those who are clinically detached are ‘objective’. They are unmoved by the sight, smell and sounds of the human body, and are therefore ‘above prejudice’ when examining it. But reaching this stage is a process, and the outcome is not guaranteed. A physician currently practising medicine in the UK remembers:

Dissecting earthworms in biology was no preparation. . . . One of the students was unable to sit through the introductory lecture, which was about scalpels and forceps, and fat and fascia, because of the thought of dissecting. And the first week that we were in the dissecting room he spent throwing up in the loo. At the end of the first week he blew his brains out with a shotgun. [3]

Although an extreme example, this story serves to remind us that revulsion, disgust and horror are all natural responses to the process of dissection. Only through constant exposure to dead bodies do medical practitioners become ‘clinically detached’.

The term ‘clinical detachment’ may be a modern one, but the concept is not new. The 18th-century surgeon and anatomist, William Hunter (right), often remarked to his students: ‘Anatomy is the Basis of Surgery, it informs the Head, guides the hand, and familiarizes the heart to a kind of necessary Inhumanity’. [4]

But what exactly did Hunter mean by a ‘necessary inhumanity’? And why did Hunter believe it was important that students cultivate this trait?

Before the discovery of anaesthetics in the 19th century, surgery was a brutal affair. The patient had to be restrained during an operation; the pain might be so great that he or she would pass out. Dangerous amounts of blood could be lost. The risk of dying was high; the risk of infection was even higher.  The surgeon was so feared that in many cases, patients waited until it was too late before approaching one for help.

Dead bodies, on the other hand, could not scream out in agony, nor would they bleed when sliced open. In this way, the novice could learn how to remove a bladder stone or amputate a gangrenous arm at his own leisure, observing the anatomical structures of the human body as he went along. Ultimately, this prepared the student to operate on the living. The French anatomist, Joseph-Guichard Duverney (1648-1730), remarked that by ‘seeing and practising’ on dead bodies, ‘we loose foolish tenderness, so we can hear them cry, with out any disorder’. [5] In other words, the surgeon gained a ‘necessary inhumanity’ through the act of dissection.

Of course, this required the novice to overcome the realities of dissection first. Even today, dissecting a human body is a dirty business. Druin Burch, author of Digging up the Dead, remembers his experiences from medical school:

Modern corpses are preserved in formalin, and the smell never quite left my hands at the end of the day…although we took care to dispose of the human parts properly, cutting someone up proved to be a messy occupation. I got half used to finding bits of fat and connective tissue later in the day, trodden onto the sole of my shoe or hitching a ride in a fold of my jeans. It made no emotional or practical sense to treat such unpleasant surprises with any sacred reverence. A half-preserved and half-rotten piece of fat trodden into the carpet cries out for no formal burial. [6]

Yet for all its ‘messiness’, dissection today is still radically different from how it was in the past. The pickled corpses, the metal gurneys, the crisp white sheets: all of this serves to dehumanise the process, and to create uniformity and regularity where it would otherwise not exist.

For these reasons, it was probably more difficult in the past to remove oneself emotionally from the act of dissection. Students often complained about the ‘putrid stenches’ emanating from rotting corpses. These were not the sterilised bodies of today—with their frozen limbs and starched linens. These were bodies often plucked from the grave after two or three days in the ground, in partial or even advanced stages of decomposition. They were bloated and rotten. There was nothing clinical or sterile about this experience.

Ultimately, however, students began to view the body not as a human but as an anatomical specimen. Some became so detached that they were able to cut open the bodies of relatives, as in the case of William Harvey, who dissected both his father and his sister in the 17th century. This, of course, was highly unusual, but it does illustrate the extent to which a surgeon or anatomist could remove himself from the horrible reality stretched out before him.

In a post-anaesthetic world, the ‘desensitisation’ of medical students to the human body raises new concerns. ‘Clinical detachment’ may help students through the messy business of dissection, but is it necessarily a good trait to carry on into practice? After all, a patient is not a cadaver: he or she is a living, breathing human being who experiences a wide array of emotions and physical sensations.  One physician recently remarked:

I found myself looking at the body as a wonderful machine, but not as a creature with a soul—that worried me a bit. What in fact I had to do was consciously unlearn that sort of thing, and start to look at human beings as human beings. [7]

Of course, what has been done cannot easily be undone. And perhaps shouldn’t be.

Lindsey Fitzharris is currently a Wellcome Trust Research Fellow in the History Department at QMUL. Her website, The Chirurgeon’s Apprentice, focuses on the history of pre-anaesthetic surgery.

1. Edinburgh University, MS Dc 6.4. Originally discussed in Ruth Richardson, Death, Dissection and the Destitute (1987), p. 31.
2. Ruth Richardson, ‘A Necessary Inhumanity?’, Journal of Medical Ethics: Medical Humanities 2000 (26), p. 104.
3. Qtd in Richardson, ‘A Necessary Inhumanity?’, p. 104.
4. William Hunter, Introductory Lecture to Students [c. 1780], St Thomas’s Hospital, MS 55.182.
5. Patrick Mitchell, Lecture Notes taken in Paris mainly from the Lectures of Joseph Guichard Duverney at the Jardin du Roi from 1697-8, Wellcome Library, MS 6.f.134. Originally qtd in Lynda Payne, With Words and Knives: Learning Medical Dispassion in Early Modern England (2007), p. 87.
6. Druin Burch, Digging up the Dead: Uncovering the Life and Times of an Extraordinary Surgeon (2007), p. 51.
7. Qtd in Richardson, ‘A Necessary Inhumanity?’, p. 104.