Wednesday, April 16, 2014

A spectrum of sorts

General talk about views of the world can be very frustrating and unproductive. But reading this piece about the incompatibility between science and most forms of religion (and particularly the associated comment thread with its predictably divergent views) has prompted me to make a few general observations of my own.

The problem is not just that words like 'religion' are vague, but also that more technical terms like 'physicalism', 'naturalism', 'idealism', 'empiricism' and 'rationalism' are also understood in different ways by different people. Countless scholarly articles have been written defining, redefining, defending or attacking particular positions. I may have another look at some of this literature soon, if only to review and refine the terms I use to define my own stance.

But I think the issues that really matter can be set out fairly simply in the form of a continuum. Such a basic, one-dimensional picture cannot, of course, begin to cover all angles or possibilities but it does allow one to represent in a plausible and useful way some of the most important differences in the way people see the world.

At one end of the continuum you have people who don't see any justification for believing in the existence of anything other than the sorts of things with which science (and mathematics) is – at least potentially – equipped to deal, whether one is thinking of the fundamental structures and processes addressed by physics or the more complex structures and processes dealt with by other areas of science.

What people at this end of the continuum reject is the notion that in addition to the reality (or realities) studied by the sciences (including the social sciences) there is some other reality not amenable to science which impinges on our lives. Like a spiritual realm, or a transcendent moral realm, or some form of 'destiny'. The crucial issue here is that scientific approaches reveal no underlying purpose or goal or enveloping moral reality behind the phenomena of the natural world – in fact, they appear to reveal the absence of any such thing.

At the opposite end of the continuum you have people who embrace a view of the world which purports to go beyond the science and which incorporates spiritual or supernatural or teleological or transcendently moral elements.

At the extreme are believers in spiritual or supernatural forces which can override normal physical laws. Most well-educated religious people today, however, accept that the physical world operates as described by science and that the spiritual or supernatural realm with which their religious beliefs are concerned is – must be – quite compatible with scientific reality. Such sophisticated believers could be seen as embracing both naturalism and (a subtle form of) supernaturalism. Or, looked at another way, as seeing the natural world as embedded in a broader, all-encompassing reality.

More towards the centre of the spectrum are those who claim to reject all forms of supernaturalism but who also reject the hardline scientific view as narrow and impoverished. Advocates of process theology (or process philosophy) come to mind in this connection, but, though they claim to reject supernaturalism and embrace naturalism, theirs is a form of naturalism which goes well beyond the usual understanding of the term.

Ordinary agnostics, who are prepared neither definitively to embrace nor to reject spiritual possibilities, would also find themselves somewhere in the centre.

The central part of the spectrum is admittedly a very ill-defined and perhaps unstable area. It is characterized more by what the individuals involved don't accept than by what they do, and I tend to want to interpret their positions as at least tending one way or the other. Process thinkers, for example, for all their explicit rejection of supernaturalism, clearly tend to the religious end of the spectrum. Others, who might maintain links with religious rituals for merely social or cultural reasons, tend in the opposite direction: their actual beliefs may not differ much at all from those of people who explicitly embrace a hardline, science-oriented view.


On a related matter, it can be argued (on historical, sociological and logical grounds) that philosophy and religion are intimately linked and, though I won't elaborate on that idea here, I think it's worth remarking that a large (and, in America at least, increasing) number of philosophers are not only anti-scientistic but also religious.

Ludwig Wittgenstein was a prominent and interesting example, not least because of the huge influence he has exerted and continues to exert. He kept his religious orientation pretty much to himself. But it was there – and it clearly motivated his philosophical thinking.

As well as his private notebooks, we have detailed accounts by a number of Wittgenstein's friends to support the view that he had strong religious tendencies and commitments. Maurice O'Connor Drury's recollections are particularly important, and Norman Malcolm (another close friend) explained Wittgenstein's vehement rejection of scientism in terms of his religious orientation.

Henry Le Roy Finch has made the point that Wittgenstein was throughout his life a supernaturalist in the mould of Pascal and Dostoievsky. As well as explaining the tenor of his thinking in many areas, this religious orientation also led – more than any other single factor – to his falling out with Bertrand Russell. The gulf between their basic outlooks was just too great.

This view accords well also with that of Ray Monk who has written intellectual biographies of both men, and who, in a lecture I heard him give some years ago, emphasized not only the absolute contrast and utter incompatibility between Russell's secular outlook and Wittgenstein's essentially religious view of the world, but also the way their respective views permeated their philosophical thinking. (Monk identifies very strongly with Wittgenstein's general outlook – and does not hide his distaste for Russell's.)

Thursday, March 20, 2014

Science as a way of seeing

Attitudes to science and attitudes to language are often related. Many science-oriented people are 'linguistic revisionists'. They have a low opinion of ordinary language (because of its vagueness and ambiguity) and seek to reform it or replace it wherever possible with various formalisms. Conversely, a negative attitude to science and mathematics and logic is often evident amongst lovers and respecters of natural language (especially in literary circles for example).

But there is no reason why one cannot combine a passionate commitment to a scientific (even scientistic) view of the world with a profound respect for natural languages – these curious products of biological and cultural evolution – as objects or systems and with a recognition of what these systems are uniquely equipped to do.

To complicate matters, it's also possible to combine a commitment to the formal sciences with a passionate hatred for the physical sciences. This is a not uncommon position, actually, but one I will not deal with here.

What follows, then, are some preliminary and loosely connected notes on the differences between broadly scientific and other modes of thinking, seen in relation to language.


Reasoning and deduction can, of course, be framed in formal terms, and even natural languages can, to an extent, be seen as interpreted formal systems.

Such formal logical approaches – which don't come naturally to most of us – represent a limited but (paradoxically) revealing perspective, rather like an X-ray image, or a monochrome drawing (a landscape, say).

They have their own beauty, these approaches, but it is a spare beauty which derives from abstraction, from leaving things out – like soft tissue in the case of the X-ray, or colour and smell and sound and movement and a third spatial dimension in the case of the drawing.

Revealing and beautiful – and also useful. It was this mode of thinking that gave rise to mathematics, science and technology. And, in the mid-20th century, habits of abstract and reflexive thought finally brought formal systems themselves to life in the form of the digital computer.

But computers, as embodiments of formal thinking, suffer the limitations of formal thinking, and are not well-equipped to deal with the rich parallelism of human perceptions or the tacit knowledge implicit in ordinary human actions and interactions and language use. Their strengths are our weaknesses and their weaknesses our strengths.

What is most notable about normal human brains – in stark contrast to machine intelligence – is their remarkable ability to deal with non-abstract things, and, in particular, with the hugely complex sensory and social realms; in conjunction, of course, with natural language, the bedrock of social life and culture.

Human languages are in fact quite remarkable in their capacity for expressing the subtleties of psychological and social experience. I don't much like the word 'literature'; it's a bit stuffy and pretentious but it's the only word we've got in English that picks out and honors, as it were, texts which explore and exploit this capacity.

The word 'letters' worked in compound expressions in the relatively recent past ('life and letters', 'man of letters') but is now quite archaic. 'Belles lettres' even more so.

The adjective 'literary' is, however, neither pretentious nor archaic, simply descriptive. It can be a neutral indicator of a specific context of language use. Or it can be used to designate (often pejoratively, it must be said) a particular style or register of language use (in contrast to technical or plain or straightforward or colloquial language, for example).

In the early 20th century, the linguist (and one-time student of Ferdinand de Saussure) Charles Bally saw the need to expand the scientific study of language to encompass the subjective and aesthetic elements involved in personal expression. His notion of stylistics was further developed by thinkers associated with the Prague school – most notably Roman Jakobson, who listed the 'poetic function' as one of the six general functions of language.

[I am always wary when scholars make numbered lists of this kind (suspecting that reality is rather less amenable to clearcut categorization than the scholars would wish).

Though his overview of linguistic functions is harmless enough, Jakobson did in fact have a tendency to drive his more technical insights too hard and too far. On markedness and binarism, for instance. But that's another story.]

On the question of the possibility of a satisfactorily scientific study of style I am undecided.

Certainly, the importance of stylistic elements in actual human communication is often underestimated and communication failures are often the result of stylistic rather than substantive issues. The aesthetic element is also important in its own right (as Jakobson saw).

But scientific approaches are characterized by their narrow focus and abstractness: by what they leave out. And what they leave out is generally the subjective or experiential side of things. Twentieth century phenomenologists and others tried – and failed – to reinsert into the scientific view what had been omitted.

A supposedly 'scientific' approach (phenomenological or otherwise) could never really replace, as I see it, the informal 'close reading' of a text or spoken exchange (for example) by a perceptive reader or listener who was well versed in the language and culture (or sub-culture) in question.

Was a particular characterization plausible or a given piece of dialogue convincing? Was a particular remark witty or just sarcastic or rude? Was someone being condescending in what she said, or kind (or both condescending and kind)?

Often the answers to such questions will depend not only on non-verbal and para-linguistic factors but also on the subtle connotation of a word or turn of phrase.

Logical languages (like the predicate calculus) strip these psychological and emotional and aesthetic elements away; and all scientific language – even in the social sciences – aspires to denotation, pure and simple.
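
To make the point concrete – this is my own illustrative example rather than anything drawn from the sources mentioned here – a remark like 'She reminded him of the deadline' might be rendered in the predicate calculus as something like

$$\mathrm{Remind}(s, h, d)$$

where s, h and d stand for the speaker, the hearer and the deadline. The formula records who reminded whom of what, and nothing more: whether the reminder was kind, condescending, or both at once has no place in it.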

As I started out by saying, that spare, direct approach has its own beauty which stems above all from its power to make us see in a more direct and culturally unencumbered way.

You can interpret the scientific way of seeing things (which goes beyond science as such) in an almost mystical way, in fact: as a means of 'cleansing the doors of perception', of temporarily sloughing off the necessary – and necessarily arbitrary – cultural baggage of social existence.

Monday, February 24, 2014

Death and the sense of self

This is a postscript to some previous discussions on death, human identity and 'the phantom self'.

These issues are quite maddening because one feels they should be simple. But (certainly as philosophers like Derek Parfit present them) they don't seem so.

I have given my (provisional) views on all this, and one of my conclusions is that Parfit is just wrong to suggest that day-to-day survival is not what it seems – that it is virtually equivalent to dying and having an exact copy live on.

Sure, the notions of the self and identity are problematic, but our struggle for (bodily) survival is at the heart of things, surely. We know what it is to go into an unconscious state and subsequently wake up. And we can imagine – not waking up! (Foresee our own actual death, in other words.)

However, having had various private discussions on this matter, I recognize that some people see it differently from me and would be happy enough to have their bodies destroyed so long as an exact copy survived.

"But look at it from your point of view," I would say. "You go into the (transporter) machine, get scanned, lose consciousness, and that would be that. You wouldn't 'wake up' as the copy (or one of the copies if there were several). You wouldn't wake up at all. Ever. Whereas, of course, for other people 'you' would still be there. Your wife would not have lost her husband, etc.. But you would have lost your wife – and everything else."

"But this you you talk about, what is it? You speak as if it's a soul or an essence..."

Which I of course deny. But I see that my interlocutor just doesn't get what I am saying, and I begin to wonder if I am making sense.

People see these matters very differently, and I suspect that one of my interlocutors may have given an explanation of sorts when he said, "Some people just have a stronger sense of self than others."

Those with a stronger sense of self, then, would be less likely to identify that self with any copy, however exact.

You could also plausibly see a strong sense of self as being associated with a strong survival instinct (and/or egoism), and a weaker sense of self with a less-strong survival instinct. But the crucial question is: how does this translate into truth claims?

It could be that a weaker sense of self tends to obscure – or blur – the simple (and tragic) truth about death. Then again, perhaps a strong sense of self and survival instinct leads one to underestimate the equivocal and tenuous nature of the self.

The human self is a complex – and indeed tenuous – phenomenon, based as it is on cultural and social as well as biological factors. But tying its fate to the fate of the body does not entail identifying it exclusively with the body in any simple way. For the self depends on the body, even if it also depends on other things. And when the body fails, it fails.


A couple of final comments of a more general nature.

It seems clear that a straightforward scientific approach doesn't work on these problems of death and identity, just as it fails to work on other typical philosophical problems – like free will. Could this have something to do with self-reference?

The major paradoxes of logic are self-referential, and the problems being discussed here (and the free will problem also) have a self-referential element.

And though self-reference in logic doesn't relate to a human self but just to concepts turning back on themselves (like a set being a member of itself), there does seem to be a parallel that may help to explain the intractability of these sorts of questions.
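
(The canonical example, for concreteness: Russell's paradox concerns the set of all sets that are not members of themselves,

$$R = \{\, x \mid x \notin x \,\},$$

from which it follows that R is a member of itself if and only if it is not – a definition which looks perfectly well formed yet turns back on itself in a way that generates contradiction.)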

The problems (or limitations) may, in other words, be logical as well as psychological (and so deeper).

Science aspires to an objective, third-person point of view or 'view from nowhere'. It is not undermined (though perhaps dogged at a fundamental level) by those self-referential logical paradoxes. And it can readily explain (albeit from a general, objective point of view) how first-person perspectives arise in nature – and much about them.

The first-person point of view is fine, in fact – until it starts to reflect on its own nature and make (science-like) claims about itself.

Tuesday, January 28, 2014

Nouny nouns

Most of us come up with ideas which we think are good but which we don't develop or exploit. Ideas for making money or doing good, or – as in the case I am about to describe – ideas which have absolutely no possible commercial or practical applications.

Typically, we discuss these bright ideas with trusted friends or family members and get discouraged when our interlocutors are less than overwhelmed.

So let me recycle here (to the extent that I can reconstruct it from memory) one such idea which was effectively discouraged by an old academic friend and colleague whose views on the matter I may have taken a shade too seriously. Or not, as the case may be.

It relates to the topic of animism, which I raised in my previous post on this site.

There I talked about the so-called 'mind projection fallacy' discussed by Edwin Thompson Jaynes. He pointed to evidence of it in ancient literature, noting that the fallacy in question would have long pre-dated written records.

We have anthropological evidence for something like Jaynes's mind projection fallacy from studies of various non-literate cultures, but my idea was to look for evidence in the structure of language.

For our natural tendency to project human-like intelligence into non-living and non-human nature is obviously reflected in various ways in the grammar and morphology of the languages we speak or know about, and these languages would not only have reflected but also facilitated animistic modes of thinking.

You find traces of animism even in modern English idioms such as 'the wind blows', but grammatical analysis of both verbal and nominal forms takes us much further back in time.

My intention was to focus on nouns. Willard Van Orman Quine speculated (in his Word and Object as I recall) that the most basic form of noun was the mass noun – like 'sand' – rather than the count noun – like 'hill'. The former doesn't need an article ('the' or 'a'), the latter does.

But, counter to Quine's speculations, it can in fact be demonstrated by looking at the potential for inflection – grammatical suffixes and so on – of various kinds of noun in a range of languages within the Indo-European family that the prototypical noun – the 'nounier' noun if you like – is the count noun rather than the mass noun; and, of the count nouns, animate nouns are nounier than inanimate nouns; and nouns relating to humans or human-like agents are the nouniest of all.

My intention, then, was to elaborate and refine and draw out the implications of this fact: that for many languages – including some of the oldest linguistic forms of which we have any knowledge – the nouniest nouns are personal agents.

Perhaps this idea had already been developed by others at the time I first thought of it. Perhaps it has been discussed and developed more recently. Perhaps it is just not an interesting enough idea to bother with. Or perhaps none of the above applies.

Wishing, then, to maintain – at least for a little while – a state of blissful ignorance on the matter, I am deliberately postponing any scholarly delving.

I have also refrained from mentioning the name of the linguist (now in his eighties) whose work was my jumping-off point. If his name comes up in my (or anyone else's) searching it will suggest that the territory is still relatively virgin.

Sunday, January 12, 2014

Randomness in nature

I have talked before about randomness. Somehow it seems important to know whether the world we live in is driven in part by fundamentally random processes.

Some recent findings seem to confirm (though 'confirm' is probably too strong a word) what quantum theory has suggested all along: that there are basic physical processes which are truly random.

I might also mention in this context that, in doing a bit of reading on probability and related matters, I happened to come across some references to, and a paper by, the physicist Edwin Thompson Jaynes (1922-1998). Jaynes promoted the view that probability theory is an extension of logic.

This is intuitively plausible. The concept of truth (and truth tables) lies at the heart of propositional logic, and T is, of course, equivalent to a probability of 1, F to a probability of 0. Probability theory just fills in the bits in between in a quantitative way!*
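
Here is a minimal sketch of the point (my own illustration, not something taken from Jaynes's paper): restricted to the two extreme values 0 and 1, the ordinary rules of probability simply reproduce the truth tables of propositional logic.

```python
# A toy demonstration: with P restricted to 0 (false) and 1 (true),
# the standard probability rules give back the propositional truth tables.
from itertools import product

def p_not(a):            # negation:    P(not A) = 1 - P(A)
    return 1 - a

def p_and(a, b):         # conjunction: P(A and B) = P(A) * P(B)
    return a * b         # (conditioning is irrelevant when P is 0 or 1)

def p_or(a, b):          # disjunction: P(A or B) = P(A) + P(B) - P(A and B)
    return a + b - a * b

for a, b in product([0, 1], repeat=2):
    print(f"A={a} B={b}  not-A={p_not(a)}  A&B={p_and(a, b)}  A-or-B={p_or(a, b)}")
```

The four printed rows match the familiar truth tables for negation, conjunction and disjunction; the values strictly between 0 and 1 are where probability proper does its quantitative work.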

Of particular interest to me is Jaynes's notion of a 'mind projection fallacy', which he sees as a root cause of much false thinking, including the (in his view) mistaken ascription of randomness to certain natural events or processes.

But his case seems to suffer from an overdependence on personal intuition as well as from a lack of historical perspective. For example, he develops** his concept of a mind projection fallacy without (to my knowledge) relating it to other clearly similar or related concepts – from animism to teleological reasoning – which have been widely discussed over the last century-and-a-half.

Jaynes argues that this fallacy is evident not only in the thinking of primitive cultures and amongst uneducated people but also in scientific contexts. He uses his mind projection idea to argue against certain interpretations of probability theory and statistics as well as against certain interpretations of quantum mechanics.

The basic thought seems to be that theoreticians are all too inclined to project their perspectives (their particular states of knowledge or ignorance) on to reality. He rejects, for example, the ascription by probability theorists – and physicists, it seems – of 'randomness' or 'stochastic processes' to nature. He rejects the Copenhagen interpretation of quantum theory as a mere projection of our ignorance.

But, as I say, I find it a bit off-putting that (in the cited paper, at any rate) he not only fails to acknowledge that others have developed and discussed notions very similar to his own, but also – ironically – that he seems to sensationalize and exaggerate the significance of his own insights and intuitions.

More on the substance of his claims later, perhaps.


Let me take this opportunity to thank past readers for their interest and commenters for their comments and to wish everyone a pleasant 2014.



* Like other objective Bayesians, Jaynes sees probability theory as a formal, axiomatic system, and the calculus of propositions as a special case of the calculus of probabilities.

** Here, for example (PDF).

Monday, December 23, 2013

The phantom self

Set out below is the core section of Gordon Cornwall's analysis of the 'phantom self' (taken from the post to which I provided a link in my previous post on this site).

But first, my brief critique.

I do go along with Cornwall (and with Derek Parfit) to the extent that they deny the existence of any substantive self. What exists are bodies which are, at a basic level, conscious of their existence as (mortal) bodies and, at a more complex (and problematic) level, subject to the illusion of a (potentially independent) immaterial self.

Planning and thinking about the future need not involve these problematic beliefs in any essential way, it seems to me. And imagining possible threats to one's well-being (and the well-being of loved ones) – which of course lies at the basis of intelligent behavior and planning – needn't lead to neurosis or anything like it.

It is true, however, that our awareness of our own mortality does, at a fundamental level, cast a long shadow, putting a dampener on joy and placing real constraints on human happiness.

Parfit's statement (cited by Cornwall) that "ordinary survival is about as bad as being destroyed and having a replica" may be playful. But it seems to me only to make sense if you deny the existence not only of a substantive self but also of the sense of a specific self which a body generates as it 'survives' from minute to minute and from day to day.

This specific-body-generated first-person point of view is what we are, and we would prefer (under most circumstances) that it continue. I don't see how having a surviving 'copy' would allow that to happen.

Finally, Cornwall seems to misunderstand the distinction between the public, objective stance of science and the first-person perspective – which encompasses all of what he calls 'practice' as well as our subjective understanding (even when the latter is informed by science).

I just don't see any serious problems with a straightforward physicalism, at least as it pertains to the scientific understanding of the relationship between the body and the sense of self.


Cornwall writes:

"Belief in the special, separate unity of the self comes naturally to humans. It is the result of a trick of natural selection. Having a self-model is an adaptive feature of complex animals that are capable of moving around. The self-models of such animals are tightly coupled to their motivational systems, which include their emotional systems. The appearance of an immediate threat to self triggers a strong emotional response in most animals, activating the amygdala and launching a flood of psychosomatic and behavioural responses which tend to help them survive the crisis.

Humans are unlike most other animals in that, with our highly developed prefrontal cortices, we are capable of imagining and making detailed plans for the future. As part of imagining the future, we imagine ourselves in the future. Visualizing a threat to oneself in the future triggers an emotional, motivational response similar to that which would occur if the threat were actually happening on the present scene. The response is enabled by strong projections from the prefrontal cortex to the amygdala and associated limbic regions of the brain. The ability to label an imagined entity as ‘self,’ and have it trigger this kind of emotional response, is an adaptation that, perhaps more than any other, propelled our species into our present position of earthly dominance. Unfortunately, this adaptation [...] came at a considerable cost in unnecessary suffering. It is an effective design, but not a very good one. It is far from optimal, and certainly not elegant.

One way to view this idea is as another outgrowth of the scientific physicalism that has illuminated so much else. Looking at what we have learned in the past few hundred years, it is hard not to be impressed by scientific physicalism as the source of our most far-reaching and productive changes in outlook. Out of it came the demise of geocentrism. When the direction 'down' was displaced as a fundamental orientation of the universe, so our parochial planet was displaced as its centre. Ceding centre stage is always salutary; it resulted in a widening of horizons, a deeper engagement with extraterrestrial reality.

Scientific physicalism was also Darwin’s mindset. We no longer see ourselves as the pinnacle of creation, but as blood relatives of all other species on this planet, an extended family of creative solutions to the problem of life. They reflect us in countless ways, and we will learn from them for a long time to come. Understanding natural selection, we come to know that we are not the product of a perfect design process. We are beginning to see opportunities to improve on our own nature.

The productivity of scientific physicalism stems from its ontological parsimony. Science does not assume the existence of entities that are not needed to explain observations. Physicalists saw the opportunity to dispense with a fixed framework of space-time in which all objects had a position and velocity. There is no such framework; hence the insights of relativity. Physicalists do not need to assume the existence of God, either. What most people don’t quite realize yet is that the selves they imagine themselves to be can also be dropped from the scientific ontology, with a resulting gain, not loss, in its explanatory power. If you simply look at what is, then Parfit’s famous statement that "[o]rdinary survival is about as bad as being destroyed and having a replica" gains the presumption of truth, for there is no evidence for the existence of anything so mysterious as its negation implies. I should point out that Parfit’s characterization of ordinary survival as ‘bad’ is playful; this insight into what survival amounts to is all to the good. To embrace it is to escape the glass tunnel and engage with life on a broader scale and a longer time dimension, one that extends long after one’s biological death.

One more thing. My approach to this subject has been, and remains, one of intellectual discovery. I’ve always been more interested in learning the truth than in changing myself. Advocates of ‘spiritual practice’ sometimes tell me I’m doomed to failure; the truth cannot be grasped intellectually. Respectfully, I think the jury is out on that. Western philosophers in the analytical tradition have justly been criticized for mistaking their own failures of imagination for metaphysical necessity. So, too, past failures to intellectually grasp religious insights into ‘no-self’ should not be taken as proof that all such attempts in future will also fail. Scientific progress has achieved much, and will achieve much more. I don’t know of any convincing argument that science cannot leap this hurdle."

Thursday, December 12, 2013

The glass tunnel

Adrian McKinty is to blame. He started a discussion on Derek Parfit's perennially frustrating ideas on personal identity and death. You will see that I reiterated my previously-stated views* (which are similar to Adrian's own) in the course of an exchange on the comment thread.

And now I have stumbled across Gordon Cornwall's sophisticated analysis which defends Parfit's view and so implicitly challenges mine.

My intention, then, is to revisit the very important questions that lie behind these discussions, initially by reading and thinking about what Gordon Cornwall has to say. I can't reject it just because it has a mystical or religious feel which I don't like and which makes me suspicious (just as Parfit's approach does).

But first let me make a few general comments on my attitude to Derek Parfit as well as try to set out the emotional context of my thinking on these matters.

When I first encountered Parfit's 1984 book, Reasons and Persons, I remember concluding that his view seemed inconsistent with planning and caring about one's future, with prudence basically. But Parfit himself seems to have made it into his eighth decade without any trouble – and (if his claims are to be believed) with less stress than would have been encountered had he retained his earlier, more conventional view of human identity.

My main concern, however, is not to decide which view is more conducive to longevity or quality of life, but rather to figure out which view gives the truer picture of our individual selves.

Parfit experienced his change of viewpoint on personal identity from a conventional view to one which did not privilege the future over the past – and which downplayed the centrality and perhaps even the reality of his very existence as a self – as liberating.

Previously, he had, as he put it,

"... seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of the glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others." [Reasons and Persons, p. 281]

This talk about caring for others (especially from a son of medical missionaries) makes me wary. Is Parfit merely adopting (the broad outlines of) an essentially religious outlook and rationalizing it in philosophical terms?

But before turning (in a subsequent post) to examine alternative views more closely, let me set out briefly the broad outlines and emotional drivers of my current position.

My position could be seen as narrower than Parfit's, and it aspires to an almost animal-like simplicity. ('Almost' because animals don't worry about the future – or foresee their own inevitable deaths.)

Though I doubt that my self has any substantive reality (and to this extent I may have more in common with Parfit than I am assuming here), I know that whatever reality it has is entirely dependent on the continuing existence and proper functioning of this body. Oversimplifying: I am my body.

The tragedy is, of course, that this body, like all bodies, will fail in the end. This is just how things are. Life is tragic (and comic and pathetic), and not at all bathed in sweetness and light as some religiously-inclined people are inclined to see it. From my perspective, at any rate, it seems more honorable – and more honest – to interpret life in pessimistic and uncompromising terms.

This need not entail an entirely non-religious outlook (think of Miguel de Unamuno, for example), though my approach is non-religious.

An anecdote might help explain some of my values and attitudes. Some years ago my mother had very bad pneumonia and spent a number of truly terrible weeks in an intensive care unit: close to death, hooked up to a daunting array of machines and unable to speak (because of a tracheotomy). The family was called in for a meeting with the senior doctors and nurses: they were clearly expecting her to die.

In the ICU, there was a 1:1 ratio of nurses to patients, each nurse on duty assigned to one patient only, and we visiting family members got to know some of the nurses quite well. I don't remember much of what was talked about, but I clearly remember one of them commenting that she preferred dealing with (and liked) patients who fought against death. And my mother decidedly was (and still is) such a fighter.

On more than one occasion when I came to sit by her bed when she was at her lowest ebb and hooked up to all those tubes and machines she turned and appeared to attempt to climb over the bed rails towards me. When I first witnessed this, it took a few moments to realize what she was trying to do. It was at once grotesque and sublime – and extremely moving.

I don't want to make too much of this and suggest that those who "rage against the dying of the light" are right and those who opt for more dignified options are wrong. And I fully realize that of course a nurse – especially one specializing in critical care – is going to prefer patients who don't die on her.

But speaking personally, though I admire those who decide to end their own lives when the signs are that those lives have reached a certain level of completeness, I am rather less keen on going (when the time approaches) with dignity and rather more keen on hanging around for as long as possible.


Now, having aired my general thoughts and feelings on the matter, I will try to put them out of my mind and examine what Gordon Cornwall has to say (see link above) with an open mind.



* See, for example, this post.