Monday, December 23, 2013

The phantom self

Set out below is the core section of Gordon Cornwall's analysis of the 'phantom self' (taken from the post to which I provided a link in my previous post on this site).

But first, my brief critique.

I do go along with Cornwall (and with Derek Parfit) to the extent that they deny the existence of any substantive self. What exists are bodies which are, at a basic level, conscious of their existence as (mortal) bodies and, at a more complex (and problematic) level, subject to the illusion of a (potentially independent) immaterial self.

Planning and thinking about the future need not involve these problematic beliefs in any essential way, it seems to me. And imagining possible threats to one's well-being (and the well-being of loved ones) – which of course lies at the basis of intelligent behavior and planning – needn't lead to neurosis or anything like it.

It is true, however, that our awareness of our own mortality does, at a fundamental level, cast a long shadow, dampening joy and placing real constraints on human happiness.

Parfit's statement (cited by Cornwall) that "ordinary survival is about as bad as being destroyed and having a replica" may be playful. But it seems to me only to make sense if you deny the existence not only of a substantive self but also of the sense of a specific self which a body generates as it 'survives' from minute to minute and from day to day.

This specific-body-generated first-person point of view is what we are, and we would prefer (under most circumstances) that it continue. I don't see how having a surviving 'copy' would allow that to happen.

Finally, Cornwall seems to misunderstand the distinction between the public, objective stance of science and the first-person perspective – which encompasses all of what he calls 'practice' as well as our subjective understanding (even when the latter is informed by science).

I just don't see any serious problems with a straightforward physicalism, at least as it pertains to the scientific understanding of the relationship between the body and the sense of self.


Cornwall writes:

"Belief in the special, separate unity of the self comes naturally to humans. It is the result of a trick of natural selection. Having a self-model is an adaptive feature of complex animals that are capable of moving around. The self-models of such animals are tightly coupled to their motivational systems, which include their emotional systems. The appearance of an immediate threat to self triggers a strong emotional response in most animals, activating the amygdala and launching a flood of psychosomatic and behavioural responses which tend to help them survive the crisis.

Humans are unlike most other animals in that, with our highly developed prefrontal cortices, we are capable of imagining and making detailed plans for the future. As part of imagining the future, we imagine ourselves in the future. Visualizing a threat to oneself in the future triggers an emotional, motivational response similar to that which would occur if the threat were actually happening on the present scene. The response is enabled by strong projections from the prefrontal cortex to the amygdala and associated limbic regions of the brain. The ability to label an imagined entity as ‘self,’ and have it trigger this kind of emotional response, is an adaptation that, perhaps more than any other, propelled our species into our present position of earthly dominance. Unfortunately, this adaptation [...] came at a considerable cost in unnecessary suffering. It is an effective design, but not a very good one. It is far from optimal, and certainly not elegant.

One way to view this idea is as another outgrowth of the scientific physicalism that has illuminated so much else. Looking at what we have learned in the past few hundred years, it is hard not to be impressed by scientific physicalism as the source of our most far-reaching and productive changes in outlook. Out of it came the demise of geocentrism. When the direction 'down' was displaced as a fundamental orientation of the universe, so our parochial planet was displaced as its centre. Ceding centre stage is always salutary; it resulted in a widening of horizons, a deeper engagement with extraterrestrial reality.

Scientific physicalism was also Darwin’s mindset. We no longer see ourselves as the pinnacle of creation, but as blood relatives of all other species on this planet, an extended family of creative solutions to the problem of life. They reflect us in countless ways, and we will learn from them for a long time to come. Understanding natural selection, we come to know that we are not the product of a perfect design process. We are beginning to see opportunities to improve on our own nature.

The productivity of scientific physicalism stems from its ontological parsimony. Science does not assume the existence of entities that are not needed to explain observations. Physicalists saw the opportunity to dispense with a fixed framework of space-time in which all objects had a position and velocity. There is no such framework; hence the insights of relativity. Physicalists do not need to assume the existence of God, either. What most people don’t quite realize yet is that the selves they imagine themselves to be can also be dropped from the scientific ontology, with a resulting gain, not loss, in its explanatory power. If you simply look at what is, then Parfit’s famous statement that "[o]rdinary survival is about as bad as being destroyed and having a replica" gains the presumption of truth, for there is no evidence for the existence of anything so mysterious as its negation implies. I should point out that Parfit’s characterization of ordinary survival as ‘bad’ is playful; this insight into what survival amounts to is all to the good. To embrace it is to escape the glass tunnel and engage with life on a broader scale and a longer time dimension, one that extends long after one’s biological death.

One more thing. My approach to this subject has been, and remains, one of intellectual discovery. I’ve always been more interested in learning the truth than in changing myself. Advocates of ‘spiritual practice’ sometimes tell me I’m doomed to failure; the truth cannot be grasped intellectually. Respectfully, I think the jury is out on that. Western philosophers in the analytical tradition have justly been criticized for mistaking their own failures of imagination for metaphysical necessity. So, too, past failures to intellectually grasp religious insights into ‘no-self’ should not be taken as proof that all such attempts in future will also fail. Scientific progress has achieved much, and will achieve much more. I don’t know of any convincing argument that science cannot leap this hurdle."

Thursday, December 12, 2013

The glass tunnel

Adrian McKinty is to blame. He started a discussion on Derek Parfit's perennially frustrating ideas on personal identity and death. You will see that I reiterated my previously-stated views* (which are similar to Adrian's own) in the course of an exchange on the comment thread.

And now I have stumbled across Gordon Cornwall's sophisticated analysis which defends Parfit's view and so implicitly challenges mine.

My intention, then, is to revisit the very important questions that lie behind these discussions, initially by reading and thinking about what Gordon Cornwall has to say. I can't reject it just because it has a mystical or religious feel which I don't like and which makes me suspicious (just as Parfit's approach does).

But first let me make a few general comments on my attitude to Derek Parfit, as well as try to set out the emotional context of my thinking on these matters.

When I first encountered Parfit's 1984 book, Reasons and Persons, I remember concluding that his view seemed inconsistent with planning and caring about one's future, with prudence basically. But Parfit himself seems to have made it into his eighth decade without any trouble – and (if his claims are to be believed) with less stress than would have been encountered had he retained his earlier, more conventional view of human identity.

My main concern, however, is not to decide which view is more conducive to longevity or quality of life, but rather to figure out which view gives the truer picture of our individual selves.

Parfit experienced his change of viewpoint on personal identity – from a conventional view to one which did not privilege the future over the past, and which downplayed the centrality and perhaps even the reality of his very existence as a self – as liberating.

Previously, he had, as he put it,

"... seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of the glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others." [Reasons and Persons, p. 281]

This talk about caring for others (especially from a son of medical missionaries) makes me wary. Is Parfit merely adopting (the broad outlines of) an essentially religious outlook and rationalizing it in philosophical terms?

But before turning (in a subsequent post) to examine alternative views more closely, let me set out briefly the broad outlines and emotional drivers of my current position.

My view is arguably narrower than Parfit's, and aspires to an almost animal-like simplicity. ('Almost' because animals don't worry about the future – or foresee their own inevitable deaths.)

Though I doubt that my self has any substantive reality (and to this extent I may have more in common with Parfit than I am assuming here), I know that whatever reality it has is entirely dependent on the continuing existence and proper functioning of this body. Oversimplifying: I am my body.

The tragedy is, of course, that this body, like all bodies, will fail in the end. This is just how things are. Life is tragic (and comic and pathetic), and not at all bathed in sweetness and light as some religiously-minded people are inclined to see it. From my perspective, at any rate, it seems more honorable – and more honest – to interpret life in pessimistic and uncompromising terms.

This need not entail an entirely non-religious outlook (think of Miguel de Unamuno, for example), though my approach is non-religious.

An anecdote might help explain some of my values and attitudes. Some years ago my mother had very bad pneumonia and spent a number of truly terrible weeks in an intensive care unit: close to death, hooked up to a daunting array of machines and unable to speak (because of a tracheotomy). The family was called in for a meeting with the senior doctors and nurses: they were clearly expecting her to die.

In the ICU, there was a 1:1 ratio of nurses to patients, each nurse on duty assigned to one patient only, and we visiting family members got to know some of the nurses quite well. I don't remember much of what was talked about, but I clearly remember one of them commenting that she preferred dealing with (and liked) patients who fought against death. And my mother decidedly was (and still is) such a fighter.

On more than one occasion, when I came to sit by her bed while she was at her lowest ebb and hooked up to all those tubes and machines, she turned and appeared to attempt to climb over the bed rails towards me. When I first witnessed this, it took a few moments to realize what she was trying to do. It was at once grotesque and sublime – and extremely moving.

I don't want to make too much of this and suggest that those who "rage against the dying of the light" are right and those who opt for more dignified options are wrong. And I fully realize that of course a nurse – especially one specializing in critical care – is going to prefer patients who don't die on her.

But speaking personally, though I admire those who decide to end their own lives when the signs are that those lives have reached a certain level of completeness, I am rather less keen on going (when the time approaches) with dignity and rather more keen on hanging around for as long as possible.


Now, having aired my general thoughts and feelings on the matter, I will try to put them out of my mind and examine what Gordon Cornwall has to say (see link above) with an open mind.



* See, for example, this post.

Thursday, November 21, 2013

Science and self-effacement

Famously – or perhaps notoriously – Stephen Jay Gould proposed that science and religion constituted non-overlapping magisteria. In my opinion, his claim was not plausible; but a similar claim regarding the sciences and the arts does stand up.

I want to focus here on the issues of self-expression and collaboration.

Individual and creative thinking plays an important role in science, but it involves a form of creativity which is far removed from the sort of creativity which applies in the arts. The latter is always associated with self-expression; whereas self-expression has no role to play in science.

So self-expression can be seen not only as a key demarcation criterion between the arts and the sciences but also as an indicator that these pursuits are opposites, incompatible, non-overlapping. It is a crucial part of the one, and plays no part in the other.

Collaboration, on the other hand, occurs in both the arts and the sciences. But it is an essential – and defining – feature only of the latter.

The vast majority of the greatest works of literature, music and the visual arts are attributable essentially to one man or woman. The artist draws, of course, on his or her teachers and the broader culture but in a real sense owns – as author or creator – the finished product.

Similar notions can apply even to necessarily collaborative arts like the cinema. Think of the director, Alfred Hitchcock. The best of the early films he made in England have the same winning combination of suspense, latent eroticism and humor as his American masterpieces even though he was working with entirely different people in a very different cultural context.

The arts are by their nature self-expressive, even if the expression is often, as in theatre, cinema, etc., group-based or, as in much medieval art for example, anonymous. But even in these cases, I would argue, the greater works will be more likely to bear the stamp of an individual genius or personality.

Science is just not like that. It is the antithesis of self-expression, and is all about building a common body of knowledge. To the extent that the individual's ideas are deemed to be important, to that extent the science is undeveloped and uncertain. As a science matures all traces of pioneering individual contributions are erased or at least merged into a greater, more complex and more subtle body of knowledge than any single mind could even begin to comprehend.

There was an interesting exchange a while ago on a comment thread at Rationally Speaking about the nature and the scope of science which has a bearing on this point. A German botanist working in Australia was arguing that science is concerned with everything empirical and is defined primarily in terms of its communal nature.

"... [I]t is not science if I personally figure out whether Craspedia species are apomictic. I have to share this information in a way that allows other humans to test it, reproduce it, and build on it, because science is a community effort. But then it would be science no matter how trivial the fact."

Though not everyone will see the collaborative side of science as a key defining feature – another commenter calls it "unusual" as a demarcation criterion – science has, in my opinion, an essentially communal, individual self-erasing nature. (It imposes self-effacement, as it were.)

This criterion also fits mathematics. You get untutored geniuses (like Ramanujan) but it's only when they are integrated into the mathematical community (as Ramanujan was, thanks to G.H. Hardy) that they become real mathematicians.

Thursday, October 24, 2013

Myths with pretensions

I commented recently – in the context of a post about myths relating to race and (Jewish) identity – that one of the things I like about science is its myth-destroying power.

And science (broadly construed to take in the historical sciences) certainly does have that power. But it is – I readily admit – a strangely disturbing power. It goes against the grain of human psychology and culture, which are irredeemably myth-ridden.

So when I said I 'like' that aspect of science, I was oversimplifying – leaving out the sense of ambivalence.

Let me give an illustration based on the final couple of years of my religious phase which relates not just to myths but to metaphysicalized myths – myths, if you like, with pretensions.

Two kinds of thinker appealed to me, but each in a different way.

On the one hand were those who distilled the essence (as they saw it) of the Christian myth and offered a deeply satisfying (for those who could accept it) way of relating to the challenges of life which incorporated a very deep, intuitive but historically-validated understanding of human psychology. For me, these thinkers were largely those in the Protestant (and particularly the Reformed) tradition who embraced Paul's emphasis on the absolute power of God. My favourite was Karl Barth.

But I was also attracted to a completely different kind of scholar – more scientifically- and historically-oriented – who offered none of that psychological comfort, but who offered another kind of liberation entirely. Rudolf Bultmann spoke of demythologizing the faith, but what he was doing was simply reinterpreting the old myths. More convincing were those who didn't talk about faith at all but who sought merely to elucidate the historical background of the New Testament. And the more I understood that background, the less plausible the Christian interpretations (and myths) came to seem.

But when one gets rid of one myth another will often arise to take its place. Social and political myths, for example, often take the place of religious ones.

Our brains have a special affinity for simple narratives which is explicable no doubt – at least in part – in terms of the need to generate the sense of a coherent, continuing self. We also have a strong tendency (which manifests itself in the grammars of natural languages) towards animism – seeing even inanimate nature as exhibiting human-like intentions and purposes. And though we have come to accept science's non-teleological explanations, many still shy away from these as ultimate explanations. For example, there is resistance to the view that randomness is, as modern physics suggests, a fundamental characteristic of reality.

In a sense, science is – and always will be – an unnatural activity, and the scientific worldview is a peculiarly unsatisfying one. Those of us who are committed to a scientific view of the world will always be, I fear, to some extent at war with our own natures.

I have also been thinking about mathematical Platonism again recently. Though such a view is essentially a timeless one (and so renounces narratives in the normal sense), it may still be seen to incorporate elements of myth (and teleology) as well as metaphysics. How else would it manage to exert such a strong emotional attraction (as it clearly does for many)?

The question of the plausibility of mathematical Platonism (or realism) is so important because it impinges on broader questions, such as the viability of an empiricist worldview. In fact, mathematical realism can be seen – and is seen by many – as posing major challenges not only for empiricism but also for physicalism.

And, as my instincts are (for want of a less abstract way of characterizing them) deeply empiricist and physicalist, I need to settle on a particular view of mathematics and see if, or to what extent, I will be forced to modify the basic way I see the world.

I am quite resigned to the fact that I will never entirely escape mythical thinking, but my goal is – if possible – to rid myself completely of the grand, intellectualized and metaphysicalized kind of myth and settle instead for the humble and commonplace variety.

Like the perennially-appealing prospect, mooted in a famous section of Homer's Odyssey and revived in the 18th and 19th centuries, of retiring to an exotic island paradise and drifting extremely slowly into a peaceful and uncomplicated old age.

Monday, September 23, 2013

Anti-metaphysical musings

I have been looking recently at some material relating to "the metaphysics wars", and thought it worthwhile to jot down a few notes.

No doubt, my general position would be characterized by those with other views as scientistic. It is also anti-metaphysical in that I don't see the traditional philosophical discipline of metaphysics as having much point these days.

I don't deny that there are very interesting questions in the philosophy of physics, the philosophy of mathematics and the philosophy of logic which may be characterized as metaphysical. The meta-thinking that goes on at the margins of physics, other sciences and mathematics, etc. is necessary and valuable.

But somehow, when such thinking moves away from the discipline in question and becomes more generally philosophical, problems arise.

Timothy Williamson is perhaps the most powerful and impressive advocate for this broader kind of metaphysics (and analytic philosophy generally). As an avowedly non-religious person, he can't be dismissed as having ulterior motives of a religious nature; and, being at home with formal – and specifically modal – logic, he can't be dismissed as natural language-bound or as being daunted in any way by technical rigor.

Some of the points he makes in this interview are good ones – such as noting the light that modal logic can undoubtedly throw on the workings and nature of natural language (via Montague grammar, for example), and perhaps also on the foundations of set theory – but I have to say that I am strongly inclined to reject the basic thrust of his argument in defense of metaphysics, and, by extension, philosophy (as he understands it).

Essentially the questions he seems most interested in are reminiscent of medieval scholasticism. I too have great respect for thinkers such as Avicenna (to whom he refers approvingly) and respect also for more recent – and more mathematically sophisticated – exponents of that general tradition of thought (such as Bolzano, to whom he also refers), but it seems to me that it is now incumbent upon any thinkers who aspire to deal with questions of what there is in a fundamental sense to base their accounts – at least in large part – on contemporary physics; or on mathematics if they are restricting their focus to mathematical realities.

Williamson seeks to defend the relative independence of his core preoccupations from science by invoking the old shibboleths, scientism and reductionism, and rejecting naturalism as a confused and inadequate concept.

I grant that mathematics does pose problems for advocates of strong forms of naturalism and empiricism, and there are real unresolved issues in the philosophy of mathematics. But my preference is to address these issues in a broadly scientific and mathematical context rather than in a purely logical or philosophical one, or – worse – not to address them at all and instead merely to use them as a kind of justification or license for logical excess and metaphysical self-indulgence.

Williamson cites Quine as an example of scientistic naturalism.

"Quine privileged natural science, and in particular physics, over all other forms of inquiry, to the point of not taking very seriously any theory that couldn't be reduced to part of natural science."

Williamson's view, by contrast, more or less allows the analytic metaphysician carte blanche, and Williamson's own approach to analytic metaphysics is clearly – in my view at any rate – insufficiently constrained and guided by science.

Here, for example, is an extract from an old interview in which he explains his developing views:

"My work on vagueness and ontology doesn’t really concern ontology. Probably my most distinctive ontological commitment comes from my defence of a controversial principle in logic known as the Barcan formula, named after the American logician Ruth Barcan Marcus, who first stated it. An application of this principle is that since Marilyn Monroe and John F. Kennedy could have had a child (although they actually didn’t), there is something that could have been a child of Marilyn Monroe and John F. Kennedy. On my view, it is neither a child nor a collection of atoms, but rather something that merely could have been a child, made out of atoms, but actually has no location in space and time. The argument can be multiplied, so there actually are infinitely many things that could have been located in space and time but aren’t. It takes quite a bit of work to show that the Barcan formula is better than the alternatives! That’s what my next book will be on. The working title is Ontological Rigidity."

The book was actually called Modal Logic as Metaphysics, and this is how he recently stated its main point:

"I am ... saying that it is necessary what there is. Necessarily everything is necessarily something. There could not have been more or fewer things than there actually are, and which particular things there are could not have been different. What is contingent is only what properties those things have, and what relations they have to each other. I call that view necessitism. Its denial is contingentism. Who knows how far back necessitism goes? Maybe Parmenides was some sort of necessitist..."

On the face of it, talking about (apparently countable) things (minus their properties and relations!) as given strikes me as breathtakingly naïve in the context of a physics-based understanding of reality. I can only imagine that Williamson is – like the medieval scholastics – implicitly asserting a privileged role for logic.

Quine's assertion of a privileged role for physics makes a lot more sense to me.

Admittedly I haven't looked at Williamson's ideas in any depth, but what I have seen so far – and what he says in this latest interview – really makes me question whether it would be worth the effort. I am intrigued, however, by what is driving such thinkers.

Strangely, Williamson appears not to be quite sure whether his latest work is meaningful or not – or at least seems unwilling to commit himself on the matter. There is (don't you think?) just a touch of arrogance in this passage from Chapter One of Modal Logic as Metaphysics:

"This book compares necessitism and contingentism. Which is true? Of course the question has a false presupposition if the definitions of 'necessitism' and 'contingentism' lack meaning or content. But if every enquiry must first establish its own meaningfulness we are on an infinite regress, since the enquiry into the meaningfulness of the previous enquiry must first enquire into its own meaningfulness, and so on. Better to act on the assumption of intelligibility: readers can decide for themselves whether they understand the book as they go along, and recycle it if they don't."

This passage is a combination of facile reasoning and rhetorical sleight of hand. By using the word 'understand' in the final sentence, he subtly shifts the focus to the reader's possible inadequacy and away from the original question concerning the work's meaningfulness.

In fact, I am tempted to see Williamson's work as emblematic of a broader trend. On the basis of my (admittedly limited) knowledge of the history of the relevant intellectual cultures, I discern, since the middle years of the 20th century, a disturbing falling off in intellectual seriousness in secular circles accompanied by an equally disturbing rise in anti-scientific name-calling and credulity amongst those thinkers who remain favorably disposed towards religion.

I'll finish here with a few comments about Paul Horwich, Williamson's great philosophical antagonist, whose deflationary views on truth I have referred to favorably in the past.

Horwich is opposed to the sort of traditional theoretical philosophy ('T-philosophy') which Williamson defends. I have made the point that, though I broadly accepted Horwich's account of truth, I doubted that his Wittgensteinian view of philosophy was compatible with a continuation of philosophy as an academic discipline. And, interestingly, Williamson makes a similar point in the recent interview.

"...Horwich didn’t explicitly call for T-philosophy not to be funded. I pointed out that if the picture of philosophy in his book were accurate, philosophy should be abolished. The reader encounters just two sorts of philosophy: irrational T-philosophy, and level-headed Wittgensteinian debunkers of T-philosophers. Philosophy is presented as an activity in which some people make a mess and others clear it up. Why on earth should taxpayers fund that? It looks as though we’d be better off simply abolishing the activity altogether."

Finally, I was surprised (and a bit disappointed) to learn recently that Horwich rejects naturalism, and even more unequivocally than Williamson does. He cites not only mathematical but also moral claims as a basis for his view.

Horwich is more thoroughly Wittgensteinian than I had previously thought.

Wednesday, August 28, 2013

Ideology and science

Science is not – nor can it be, in fact – immune to ideological influences. Sometimes such influences may have a positive effect, but it would be naive to believe that such factors do not have the potential to cause distortions also.

Scientists, like anyone else, need to be motivated, and often this involves seeing their own research as defending or furthering broad convictions they hold about human nature or the world in general.

There are many cases of great scientists whose major contributions to science were largely inspired by what we now see as utterly false assumptions. Copernicus and Newton might both be seen as examples of this, their discoveries as it were transcending the flawed intellectual matrix – or worldview – within which the theories were framed.

The institutions and practices of modern science are not designed to screen out personal biases and unwarranted assumptions so much as to ensure that published conjectures and theories and experimental results are exposed to rigorous testing and assessment procedures. The system works pretty well on the whole, encouraging intellectual rigor while not excluding the human element – imagination, creativity, etc. – which is essential for innovative thinking.

Areas such as evolutionary biology and the human sciences are particularly prone to ideological influences.

I have previously hinted at such influences in the case of research into linguistic development and evolution, notably in relation to the work of Michael Tomasello and his colleagues who seem to be adamantly opposed to certain formal approaches to the study of language. I am following up on this, and will have more to say in the future. (James Hurford's views appear to chart a sensible middle course, and are looking very plausible to me at the moment.)

And I have recently come across another example of ideology apparently driving scientific judgment and interpretation.

Last week Massimo Pigliucci published a list of his 'best' research papers on biological topics. It's clear from this list (and another on his Curriculum Vitae) that Pigliucci had from the beginning of his research career a special interest in defending and promoting the notion of phenotypic plasticity – the property of the genotype to produce different phenotypes in response to different environments.

In just about all the cited papers – most involving experiments with plants – the power of environmental factors to alter features of the organism is emphasized. A cursory look at the abstracts certainly suggests that the researchers (the papers are collaborative efforts) are highly unsympathetic to any approaches which could be construed as tending in the general direction of what has sometimes been characterized as genetic determinism.

Which is fine. It's only to be expected that researchers will approach such issues with strong opinions, and a degree of adversarial debate and discussion can be productive. In the end, the weight of evidence usually settles disputes, and the controversies then move on to other areas.

So I am not questioning the scientific value of Pigliucci's work – the scope and nature of phenotypic plasticity is clearly a topic of considerable interest.

But it is interesting to juxtapose his research interests in biology with his published comments about human intelligence.

In another of his recent blog posts, Pigliucci claims that environmental – cultural, in fact – factors are solely responsible for differences in patterns of involvement by males and females in different research areas. Genes don't have anything to do with it, apparently.

"[T]he fact," he writes, "that there are fewer women than men in a given field is likely the result of a large number of cultural factors (no, I don’t think it has anything at all to do with “native” intelligence, Larry Summers be damned)."

A commenter makes the point that "the greater variance of male intelligence is well established", and that genetic factors are obviously involved. The greater variance of male intelligence in this context means essentially that there is a greater proportion of individuals with very high intelligence amongst men than amongst women (and also a greater proportion of individuals with very low intelligence).
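
To make the arithmetic behind that claim concrete, here is a minimal sketch – the mean, standard deviations and cutoff are hypothetical round numbers chosen for illustration, not real test data:

```python
# Illustrative sketch with hypothetical parameters, not real IQ data.
# Two normal distributions share a mean; the one with the larger standard
# deviation contributes a disproportionate share of both extreme tails.
from scipy.stats import norm

MEAN = 100
SD_NARROW, SD_WIDE = 14.0, 16.0   # hypothetical standard deviations
CUTOFF = 145                       # an arbitrary "very high" threshold

p_narrow = norm.sf(CUTOFF, loc=MEAN, scale=SD_NARROW)  # P(score > CUTOFF)
p_wide = norm.sf(CUTOFF, loc=MEAN, scale=SD_WIDE)

print(f"narrow-variance group above {CUTOFF}: {p_narrow:.4%}")
print(f"wide-variance group above {CUTOFF}:   {p_wide:.4%}")
print(f"ratio (wide/narrow): {p_wide / p_narrow:.1f}")
```

The same arithmetic applies, symmetrically, at the bottom tail.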

It is not impossible that some purely environmental explanation for this pattern could be found, but the evidence, even if it is not conclusive at this stage, certainly points to an at least partly genetic explanation. So the fact that Pigliucci seems to have a very strong disinclination to accept that genetics is significant here clearly goes beyond the science and points to a prior ideological commitment.

The emotional tone of his references to Lawrence Summers may not strengthen but certainly doesn't weaken my case. "I can't stand the bastard," Professor Pigliucci notes in a comment.

Pigliucci's strong ideological and moral convictions – which no doubt played a part in his decision some years ago to shift his focus from science to philosophy – may be able to be explained largely in terms of cultural factors.

But I just can't help thinking about Massimo's (hypothetical) monozygotic twin who was raised by a Swedish family. Did he too follow a scientific career? Does he have a penchant for bow ties? Is he a religious skeptic? Does he too have strong views on political and social questions? And what is his attitude to Lawrence Summers, I wonder?

Sunday, August 11, 2013

Life, death and computation

I have been spending a bit too much time lately reading other people's blogs and (to some extent) participating in associated discussions. The main problem with this sort of activity is that – largely because the focus of discussions is always shifting – it encourages superficial debate at the expense of deep understanding.

But, interestingly, two recent blog discussions on two very different sites which I happen to follow touch on a similar theme.


Biologist and philosopher Massimo Pigliucci recently precipitated a freewheeling discussion of the relevance of computers and computing to understanding the human mind and the universe in general. In fact, Pigliucci's post on the topic prompted more than 200 comments, many of which are well worth reading.

Professor Pigliucci has a disarming tendency to rush in where more cautious academics fear to tread – that is, beyond his areas of specific expertise. (I suspect his approach owes something to the intellectual traditions of his native Italy, where academics have traditionally played an important role in the broader cultural, moral and political sphere.)

Pigliucci argues strongly against functionalist and computational views of the mind. I don't have strong views on this question, though I share Pigliucci's skepticism about some of the (as I see it) wilder claims about mind uploading and the scope of simulations etc.

I did, however, question his contention that seeing the operations of nature in computational terms is likely to lead to mathematical Platonism, commenting as follows:

My understanding is that many of the leading proponents of an information- and information processing-based approach to physics see information as physical. The bits or qubits are always 'embodied' in actual physical processes, albeit that these processes are understood at a deep level in terms of the processing of information. (There are close parallels between information theory and thermodynamics.)

So I'm not sure that such a view leads to Platonism. Seeing physical processes as algorithmic (and scientific theories as predictive algorithms) seems to me a genuinely interesting perspective: but it may well be that there is no way actual physical processes can be perfectly simulated (or predicted).
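
To spell out the parenthetical remark at the end of that comment: the parallel I have in mind is the formal one between Shannon's entropy and the Gibbs entropy of statistical mechanics, together with results such as Landauer's bound on the energy cost of erasing a bit – standard textbook statements, sketched here from memory:

```latex
% Shannon entropy (information theory) and Gibbs entropy (statistical mechanics)
% share the same form; Landauer's principle gives the minimum heat dissipated
% when one bit of information is erased at temperature T.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
H &= -\sum_i p_i \log_2 p_i && \text{(Shannon)} \\
S &= -k_B \sum_i p_i \ln p_i && \text{(Gibbs)} \\
E_{\min} &= k_B T \ln 2 \ \text{per erased bit} && \text{(Landauer)}
\end{align*}
\end{document}
```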



Adrian McKinty is a novelist with a strong interest in social, cultural and philosophical topics. In the comment thread of a post at McKinty's nicely named site, The Psychopathology of Everyday Life (I know – Freud got there first) – a post about Philip Larkin featuring his confronting poem 'Aubade' – McKinty mentions Nick Bostrom's simulation argument: that if we accept two fairly plausible-seeming assumptions, then our universe is almost certainly a 'simulated' universe created by an advanced civilization.

As I commented there:

I am ... (prompted by your comments, Adrian) having a look at Nick Bostrom's ideas. My initial attitude is skepticism, but that may just be what he would call my status quo bias jumping in.

I do think it makes sense (simply in terms of physics) to see natural processes in terms of information processing, but it is a big jump from there to thinking about beings who might have set the process going (and to calling it a simulation).

And what would Larkin make of all this? (Turning in his grave, I suspect.)


I am continuing to look into the simulation argument which I first encountered some years ago. More later, perhaps.

But regular readers will know that I am very skeptical of arguments and points of view which take their origins from a philosophical (as distinct from a scientific) base. Bostrom's main argument for the simulation hypothesis is in part statistical but basically philosophical – and far from convincing from my point of view.
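
For what it's worth, the statistical core of the argument, as I recall it from Bostrom's paper, is just a ratio; treat the notation below as my paraphrase rather than Bostrom's own presentation.

```latex
% My paraphrase of the ratio at the heart of the simulation argument:
% f_P   = fraction of human-level civilizations that reach a "posthuman" stage
% Nbar  = average number of ancestor-simulations run by such a civilization
% f_sim = fraction of observers with human-type experiences who are simulated
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
f_{\mathrm{sim}} \approx \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
\]
% Unless the product f_P * Nbar is close to zero, this fraction is close to one;
% hence the pressure towards the conclusion that we are probably simulated.
\end{document}
```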

I can't help feeling that people like Bostrom (and David Pearce who influenced him) are driven by a kind of religious instinct. Certainly some of the groups with which they are associated have a cultish feel.

The other thinker mentioned by Adrian in the comment thread is Samuel Scheffler. Scheffler applies 'what if' scenarios to thinking about death. What if we knew the world was going to be destroyed soon after our death? His general point seems to be that, at bottom, we are less concerned about our own personal fate per se than about our fate seen in the light of a continuing social context.

This may well be, and such thinking is very much in accordance with the view that the sense of self derives from the linguistic, cultural and social context in which we grow up. But I think Scheffler overplays the extent to which future generations give meaning to our lives.

Also, I had a look at Scheffler's background. And it seems pretty clear that his being a socialist (he is apparently a disciple of the 'analytical Marxist' G.A. Cohen) would have – to some extent at least – shaped his approach to thinking about the future in general, and about ethics.

Thursday, July 25, 2013

Empathy and language

The practice of pointing by infants raises some interesting questions about the psychological foundations upon which human communicational and linguistic capacities are built.

As explained in an article cited in the comments section of the previous post, young children routinely point to direct the attention of a nearby adult to something they find interesting and apparently wish the adult to see and appreciate also.

When an infant doesn't start pointing by the appropriate age (about 12 months), it's often a sign that they don't have an intuitive sense of other minds – and also of linguistic problems ahead. (I originally came across discussions of this phenomenon in material on identifying the early signs of autism.)

The article referred to above draws on papers by Michael Tomasello and his colleagues which explore the phenomenon of infant pointing and associated behaviors. Tomasello and his fellow researchers argue for "a deeply social view [of the process] in which infant pointing is best understood – on many levels and in many ways – as depending on uniquely human skills and motivations for cooperation and shared intentionality (e.g., joint intentions and attention with others). Children's early linguistic skills are built on this already existing platform of prelinguistic communication."

The researchers note that the kind of pointing they discuss is unique to humans and depends on certain key insights about the existence and nature of other minds as well as emotional factors – essentially a desire to share one's perceptions and to share in the perceptions of others.

A cursory reading of sources cited in the Slate article and related material suggests to me that Tomasello and his colleagues may well be overplaying their intuitions about sharing in their claims about the origins and development of human communication and language.

Of course, emotional factors cannot be ignored, but could not these elements be explained in terms of cognitive imperatives and the practical benefits of collaboration and reliable information transfer?

György Gergely and Gergely Csibra explicitly challenge Tomasello's views on the centrality of the emotions associated with shared intentionality and focus instead on the communication mechanisms necessary to ensure efficient cultural learning.

A crucial point relates to the efficacy of the highlighted emotions. Tomasello and his colleagues posit the desire to share emotional states as a key explanatory factor rather than merely as one element in a diverse suite of human abilities and behaviors.

But I am nowhere near having a sufficiently strong grasp of the material to take sides in this dispute.

The same (or similar) perceptions and feelings which apparently motivate gestural communication – however we might characterize them – certainly seem, in normal infants, also to motivate and facilitate the child's rapid and apparently easy acquisition of whatever language or languages they are routinely exposed to.

Significantly, though, the complexities of language can be learned (albeit often with some difficulty) even by those who lack a strong intuitive sense of other minds.

It's certainly plausible that the historical development both of prelinguistic modes of communication (like pointing) and language amongst our ancestors was dependent upon (amongst other things) certain empathetic perceptions and feelings. But, of course, the cognitive and affective factors involved are in practice always inextricably linked, sometimes in very complicated ways.

In his work on autism, Simon Baron-Cohen distinguishes between the cognitive and affective aspects of empathy. Cognitive empathy is all about what we perceive and understand about the mental states of others, whereas affective empathy concerns our emotional responses to this knowledge. Strength or appropriate responses in one area do not necessarily entail strength or appropriate responses in the other.

For example, the autistic person typically scores poorly on tests of cognitive empathy (e.g. reading particular emotions in pictures of faces cropped to reveal little more than the eyes), but often exhibits appropriate affective responses (e.g. to perceived suffering). By contrast, the psychopath typically has no problem at all with cognitive empathy (or language, for that matter), but displays deficiencies in terms of affective response.

Speculations about the way language evolved will necessarily draw on the findings of cognitive and developmental psychology as well as other areas. But, while it is reasonable to assume that affective responses played a role in the development of language, I have some doubts about the way Tomasello and his colleagues present the basic issues and about some of their key claims.

Also, as someone with a background in formal approaches to language and syntax, I am naturally wary of approaches which downplay the significance of this side of things. I was unimpressed, for example, by the comments by one of Tomasello's co-researchers, Malinda Carpenter, quoted in the Slate article.

The fact that pointing seems to call on a sophisticated understanding of what is going on in the heads of other people, she noted, "suggests that [infants] can do so much more with pointing prelinguistically than we ever thought before."

Until recently, people thought that this sort of knowledge only emerged with language. But when Carpenter, who was drawn to this work through an initial interest in language, started looking at prelinguistic gestures, her perspective changed.

"[E]verything’s already there!" she said. "I completely lost interest in language because you can see so much complexity already in infants' gestures."

It depends on what you mean by 'everything', I suppose, but I would have thought that language adds a little something to the mix.

Monday, July 8, 2013

A science of language?

A large part of the fascination which language holds for many is that it is one of the key markers of our humanity. Language is at the heart of human culture and human consciousness. Tense and aspect mark our sense of time, grammatical mood our sense of possibility, personal and possessive pronouns our very sense of identity and how we see ourselves as relating to other people and things.

Partly because language is an inextricable and defining part of us – and at once social and individual – it is impossible to clearly define a science of language in the way most other sciences can be defined.

To what extent should the study of language be subsumed into psychology and neuroscience? Language is behaviour, and the human language faculty can only be said to be understood to the extent that the neurological processes which drive it are known.

On the other hand, language is also a cultural object which can be studied in its own right, both structurally and historically.

It's hardly surprising, then, that, since its rise to prominence in the 19th and 20th centuries, linguistics has, as sciences go, been unusually riven by competing frameworks and approaches, and these divisions have, if anything, increased over time. (Though I sometimes wonder how different things might have been if the late 20th century's most prominent linguist had not been such a relentless intellectual warrior and contrarian!)

Ultimately, the divisions between the sciences are merely for practical and administrative purposes: the quality – and worthwhileness – of research is not generally determined by discipline-specific but rather by more general criteria.

But I don't want to get into an abstract discussion about the unity of science or related matters. I really just wanted to make the point that language represents not so much a subject area as a number of interrelated subject areas. And, because the phenomenon of language can be approached from very different directions, it is difficult, if not impossible, to pull all these perspectives – and the knowledge implicit in them – together.

Perhaps, then, the best we can do is to focus on specific questions which may happen to relate to language in one way or another and to renounce as unrealistic the desire for a comprehensive understanding of the phenomenon of language per se.


I'll finish by mentioning a couple of language-related topics which I have been thinking about lately.

Last month I referred to the ideas of Simon Fisher and Matt Ridley on culture-driven gene evolution. The work of Fisher and others has shown that the FOXP2 gene has a crucial role to play in human linguistic abilities. The gene occurs in other species in slightly different forms and it plays various roles. Interestingly, it has been shown to play a key role in vocal expression in both birds (canaries and finches) and chimpanzees as well as in humans. Neanderthals are now believed to have had exactly the same form of the FOXP2 gene as modern humans.

I can't help thinking that the question of the origin of language retains its fascination in part because it promises to reveal something important about who we are and where we came from.

This is, I think, largely an illusion based on the idea that the abrupt discontinuity we see between ourselves and our nearest relatives (chimpanzees) was always there. But intermediate forms did exist (until relatively recently, in fact).

In practice, I think we tend to assume, consciously or unconsciously, that our species has an essence.

It hasn't. Nonetheless, the development of human language as we know it does mark a clear historical and cultural discontinuity.

On a more practical note, I have also been thinking about the reputed benefits of bilingualism. It has been claimed, for instance, that bilingualism can delay the onset of the symptoms of Alzheimer's disease by about five years. I have some reservations about the significance of these claims. More another time.

Monday, June 17, 2013

The adjective not the noun

I – and others – have been reflecting lately on the concept of political conservatism, and these reflections have prompted some inchoate – and totally non-partisan – meta-thoughts on the problems of political ideology which I have set out below.


One assumption behind most reflections on conservatism (or on any political ideology) is that it is desirable to have a consciously worked out (personal) political philosophy. And the assumption behind this is that it is possible somehow to assess alternatives in a rational manner and arrive at a satisfactory conclusion. This latter assumption – on which the value of the whole exercise depends – I am beginning to doubt.

When you reflect on these matters, you have to start somewhere. And where you start will be somewhat arbitrary, though it may well be in part determined by your values.

For example, you may want to maximize equality; or you may be more concerned with individual freedom; or order, or one of any number of other ideals or goals.

My starting point – reflecting perhaps the importance I place on a scientific view of the world free of metaphysical and religious baggage – would be the social nature of human identity.

Even those who think they have totally rejected the idea of a soul still cling, I believe, to a version of this idea – the notion of a self-contained self, independent of any social context. It is a natural belief for us to have, and I still feel it in myself.

Take this simple thought experiment. A human body could, presumably, be grown in a laboratory, nourished and exercised to develop muscles, etc. But, if it were deprived of all normal social interactions, linguistic and other cultural input, the brain would not develop normally and this body, though apparently perfectly formed and healthy, would not, as a result, constitute a person. It would not have a human identity, or human awareness. What rights would it have, if any?

This idea of a living human body with a radically undeveloped brain (due to the withholding of social inputs during development) is – to me at any rate – slightly shocking and confronting. It tells us something about ourselves: that our sense of self, our human identity comes just as much from without – from a particular social and cultural milieu – as from within. The social matrix within which we grow is an essential component of our individuality and our very humanity. We never were and can never be 'self-contained'.

This fact has implications for any social or political philosophy. I won't try to spell out the implications here, except to say that such a view is fatal for all forms of atomistic individualism.

Values, as well as often determining the starting-point for one's basic thinking about politics, also play a part in determining the direction of the argument. And this basic notion of the social self could clearly be developed in either a progressive or a conservative direction. The choice seems to depend on taste or predilection.

Which leads me to wonder whether developing such thoughts and arguments is worthwhile (other than for polemical or similar purposes).

Moral, social and political reflections and arguments move in a linear fashion like language. In fact, the thoughts only really crystallize when spoken or written down. But, clearly, this linear process does not do justice to our deepest values which are multidimensional. Arguably, such a process cannot represent our values accurately, much less enable us to assess or justify them.

We can, of course, describe, catalogue and consider the various political outlooks which others have elaborated and defined, seeing them as more or less internally consistent and competing frameworks. But, unfortunately, all these frameworks are – necessarily – highly simplified conceptual structures which are inadequate not only as models for how the (social and political) world works (or could work), but also as representations of the actual political beliefs and values of individuals and groups.

They are arguably post hoc rationalizations, and their main function, you could say, is to facilitate the formation of, and deepen solidarity within, social and political groupings. Part marketing tool, part reinforcement mechanism.

What I am saying essentially is that such frameworks are inevitably inadequate as serious belief systems.

But, though the various –isms are no good, the adjectives from which they derive do real and important work. So I think one can still usefully talk about conservative approaches to social, political and other questions, and distinguish them from, for example, liberal (or progressive) approaches.

Increasingly I see these matters in terms of individuals having – due mainly to various genetic and developmental factors – different psychological profiles and personality traits. These differences can, of course, be mapped and defined in different ways, but something like a conservative/progressive or conservative/radical contrast will, I think, continue to be a feature of models of human personality and cognition.

Wednesday, June 5, 2013

Necessary freedom


The mathematician G.H. Hardy – most famous amongst the general public for his having 'discovered' the self-taught prodigy Ramanujan – said that the only other career that might have suited him was journalism.

When I first read this it surprised me, even bearing in mind the fact that journalism in early 20th-century England was very different from journalism today.

Clearly Hardy could write – his short book, A Mathematician's Apology, is a minor classic. But it's very clear from that essay that his identity was inextricably bound up with being a mathematician, and nothing else.

Late in life he attempted suicide, not just because of the general effects of failing health but also – and perhaps mainly – because his mathematical powers had deserted him.

Rather depressingly, he claimed (in his Apology) that most people don't have any significant talent for anything. But "[i]f a man has any genuine talent he should be ready to make almost any sacrifice in order to cultivate it to the full." Anyone, he asserted, who sets out to justify his existence and his activities has only one real defense. And that is to say, "I do what I do because it is the one and only thing that I can do at all well."

Why did he mention journalism, I wonder? It's particularly puzzling because journalism is so utterly different from mathematics generally – and especially from Hardy's style of doing and thinking about mathematics with its focus on timeless beauty.

This is in addition to the fact that mathematics is normally associated with the sciences. So, naïvely, I would expect a mathematician to say that, had he not pursued mathematics as a career, he might have become a scientist or engineer of some kind, for example.

But Hardy, though he was attracted to biology in his youth, exhibited in his adult life no great interest in or high regard for science, and he had a quite negative attitude to applied science. He prided himself on the fact (as he saw it) that his work had no practical applications.

And he disliked new technologies. He had a telephone installed in his house which he ostentatiously avoided using: it was for the use of any guests who fancied that kind of thing.


By journalism Hardy certainly didn't mean writing about scientific (or mathematical) subjects for a general audience. He meant, presumably, mainstream journalism. And my guess is that he was attracted to it for three basic reasons.

Firstly, he recognized that he had a second talent, a gift for writing – and writing with style and wit and conciseness. (He was famous amongst his friends for his postcards.)

Secondly, though scornful of politicians, he did have an interest in politics and was active in a pacifist organization, the Union of Democratic Control, during World War 1. Significantly, one of the leading and most impressive figures involved in this organization was the French-born journalist E.D. Morel.

And last but not least, I suspect that Hardy saw in the lifestyle associated with journalism (as in the academic lifestyle of the time) a kind of freedom which for a certain kind of person is not just desirable but necessary.

Saturday, June 1, 2013

Cultural innovation, genes, and the origin of language

Simon Fisher and Matt Ridley have argued, mainly on the basis of new DNA sequencing data, that cultural factors were far more significant in driving genetic changes in the evolutionary history of our species – such as those that led to the development of language – than was previously thought.

"The common assumption is that the emergence of behaviorally modern humans [sometime] after 200,000 years ago required – and followed – a specific biological change triggered by one or more genetic mutations."

But the "prevailing logic in the field may put the cart before the horse. The discovery of any genetic mutation that coincided with the 'human revolution' must take care to distinguish cause from effect. Supposedly momentous changes in our genome may sometimes be a consequence of cultural innovation. They may be products of culture-driven gene evolution."

Fisher and Ridley cite obvious, uncontroversial examples where culture has driven genetic change – like lactase persistence amongst dairy-farming communities, and alcohol tolerance amongst Europeans (who generally drank more alcohol than Asians, for example).
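To make the idea of culture-driven gene evolution concrete, here is a minimal sketch of the underlying population-genetic logic – a toy model of my own in Python, with purely illustrative parameters (the starting allele frequency, the selection coefficient and the number of generations are assumptions, not figures drawn from Fisher and Ridley):

```python
# A toy haploid selection model (an illustrative sketch only, not anything from
# Fisher and Ridley's paper): a cultural practice such as dairy farming creates
# an environment in which carriers of an allele (e.g. lactase persistence)
# enjoy a small fitness advantage s, so the allele's frequency rises over time.

def allele_frequency_trajectory(p0: float, s: float, generations: int) -> list[float]:
    """Frequencies of the favoured allele over successive generations.

    p0 -- starting frequency of the allele
    s  -- selective advantage conferred by the culturally created environment
    """
    freqs = [p0]
    p = p0
    for _ in range(generations):
        # Standard haploid selection recursion: carriers have relative fitness 1 + s.
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        freqs.append(p)
    return freqs

if __name__ == "__main__":
    # Illustrative numbers only: a rare allele (0.5%) with a 5% advantage
    # approaches fixation within a few hundred generations.
    trajectory = allele_frequency_trajectory(p0=0.005, s=0.05, generations=300)
    print(f"start: {trajectory[0]:.3f}  after 300 generations: {trajectory[-1]:.3f}")
```

The point of the sketch is simply the direction of causation: the cultural practice comes first and creates the selective environment; the genetic change follows.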

The question of language origins is much more complex, of course, but there is mounting evidence – relating, for example, to variations in the FOXP2 gene in humans and other species – that cultural factors were the drivers of change.

FOXP2 is known to play an important role in human language abilities, but, in considering the roles of FOXP2 in human evolution, it is important to recognize that it has a deep evolutionary history.

"Animal studies indicate ancient conserved roles of this gene in patterning and plasticity of neural circuits, including those involved in integrating incoming sensory information with outgoing motor behaviors. The gene has been linked to acquisition of motor skills in mice and to auditory-guided learning of vocal repertoires in songbirds. Contributions of FOXP2 to human spoken language must have built on such ancestral functions.

"Indeed, further data from mouse models suggest that humanization of the FOXP2 protein may have altered the properties of some of the circuits in which it is expressed, perhaps those closely tied to movement sequencing and/or vocal learning.

"Given these findings, it seems unlikely that FOXP2 triggered the appearance of spoken language in a nonspeaking ancestor. It is more plausible that altered versions of this gene were able to spread through the populations in which they arose because the species was already using a communication system requiring high fidelity and high variety. If, for instance, humanized FOXP2 confers more sophisticated control of vocal sequences, this would most benefit an animal already capable of speech. Alternatively, the spread of the relevant changes may have had nothing to do with emergence of spoken language, but may have conferred selective advantages in another domain.

"FOXP2 is not the only gene associated with the human revolution. However, it illustrates that when an evolutionary mutation is identified as crucial to the human capacity for cumulative culture, this might be a consequence rather than a cause of cultural change. The smallest, most trivial new habit adopted by a hominid species could – if advantageous – have led to selection of genomic variations that sharpened that habit, be it cultural exchange, creativity, technological virtuosity, or heightened empathy.

"This viewpoint is in line with recent understanding of the human revolution as a gradual but accelerating process, in which features of behaviorally modern human beings came together piecemeal in Africa over many tens of thousands of years."

The accumulating evidence alluded to by Fisher and Ridley certainly makes Noam Chomsky's suggestion that language appeared all of a sudden and was the direct result of a genetic mutation look naïve and implausible.

But it also challenges the more mainstream approaches still favored by many linguists who (influenced, like Chomsky, by traditional rationalism) see the human language faculty in absolute and ahistorical terms.

Descartes saw "la raison" [reason] as being "toute entière en un chacun" [entirely and equally present in each of us], and many linguists still see language in a similar – and strangely metaphysical – way.

Tuesday, May 21, 2013

The view from Nagel-land

I said I would follow up on Thomas Nagel's views. The first twenty minutes or so* of the video inserted below is a talk in which Nagel summarizes and critiques his friend Ronald Dworkin's view of morality. Nagel speaks (as he writes) with great clarity and seriousness. (I realize that many will find the content a bit dry, but there is interest also in the style of delivery, in the very manner of the man. Ivy League and very 20th century!)

Dworkin wants ethics to be objective, and has a clever argument which appears to demonstrate that moral claims can indeed be seen as objectively true or false even within the context of a naturalistic world view.

Nagel – correctly in my view – sees our current naturalistic world view (he refers specifically to evolutionary theory) as being "difficult to square with" the objectivity of moral claims. But, as he is not willing – for moral reasons, apparently – to give up on the objectivity of right and wrong, he rejects the current naturalistic world view.

This last move is a grievous mistake, in my opinion. He is saying, in effect, that it would be just too awful if right and wrong did not have an objective basis – and so they do have an objective basis, and the scientists must have got things seriously wrong.

I respect Nagel's honesty and directness. He goes with his moral intuitions, but I would say that they take him out of the secular mainstream.

Nagel's move in this talk, by the way, needs to be seen in the context of his long-standing insistence that science, which aspires to an objective 'view from nowhere', is incomplete, for it cannot encompass or explain the reality and the realities of the first-person point of view.

This idea is associated with (because it can be used to justify) what I see as the main problem with Nagel's thinking: that he lacks, and shows little interest in, scientific knowledge.

This may not matter for certain kinds of intellectual enquiry, but scientific issues (especially relating to evolutionary biology and physics) are crucial to many of the questions Nagel addresses.

In fact, his obvious (and self-confessed) lack of knowledge in these areas makes it difficult to take his reflections on human psychology or human evolutionary or cosmic history (most recently expressed in his book Mind and Cosmos) seriously.

I don't want to posit a simplistic contrast between scientifically-trained thinkers and those with little or no scientific training, however, and suggest that only the former are worth listening to. The scientifically trained can be just as stupid and irrational as anyone else.

But it does seem reasonable to expect anyone dealing in a serious way with questions pertaining to a particular area of science to have a thorough grounding in that area, or at least in a related area of science.



* The most interesting bit, in my opinion, starts at the 14:20 mark.

Thursday, May 2, 2013

David Albert on science

Having previously wondered out loud about, and attempted to speculate on, David Albert's general perspective on science and religion, I thought I would let him speak for himself. Okay, it's just a YouTube video and it's a few years old, but Albert is impressive and direct and concise. (This is the man Lawrence Krauss called 'moronic'.)

There are allusions to a silly film Albert got involved in, one which pushes all sorts of New Agey ideas and from which he has been seeking to distance himself. What is particularly interesting (given all the fuss about his reliance on Templeton funding and so on) is that, far from coming across as sympathetic to a religious view of the world, Albert suggests that science, which is revealing a hard and mechanistic reality quite at odds with human desires and expectations, constitutes our best hope of getting at the truth of things.


[If you're pressed for time, I suggest you come in at the 10 minute mark.]

Sunday, April 14, 2013

Descriptive and normative approaches to ethics

[This piece is a substantially-revised version of an earlier post, 'Ethics in a nutshell'.]

Meta-ethical questions are a bit like questions in the philosophy of mathematics, where various forms of platonism or realism do battle with more mundane interpretations. The key difference, I would suggest, is that the philosophy of mathematics has very little impact on the way mathematics is done, whereas meta-ethical disputes do impinge on normative (or prescriptive) ethics as practised by philosophers (though not, it must be said, on most ordinary ethical decision-making).

Unfortunately, meta-ethical disputes (which are often driven by deeply-felt convictions about the nature of human life and reality) are not readily resolvable, posing problems not only for meta-ethics but also for normative ethics.

Moral reasoning is complex and difficult enough, even if one is working – like many religious thinkers – within a generally accepted broader framework. But if there is no agreed-upon framework then conclusions are going to be – to say the least – very contestable.

And what of science? Science can, I believe, change the way we see the world in a way that philosophy can't. Though there is an important distinction to be maintained between the descriptive and the normative, between scientific and value-based judgements, science can undoubtedly offer new insights into value-based questions.

Our evolving understanding of the natural world and our place in it has a profound impact not only on how we see particular moral issues but also on how we frame and respond to general questions about human values, responsibility and freedom.

For example, 'ought' implies 'can', and the findings of science have a lot to say on what is realistically possible in terms of human behavior and what is not.

More generally, as our knowledge of human psychology has advanced, there have been – and there will continue to be – changes, both subtle and profound, in the way we think about right and wrong and conscience and guilt – and also changes to institutional mechanisms for dealing with anti-social and criminal behavior.

One approach to descriptive ethics which is not strictly scientific but which complements more rigorous approaches involves looking at how adjectives like 'ethical' and 'moral', auxiliaries like 'should' and nouns like 'obligation' or 'duty' are actually used in ordinary day-to-day contexts, and attempting to discern the implicit social rules and expectations which underlie the use of such expressions.

Every society, every social group, incorporates implicit rules of behavior. These rules (some relating to etiquette or manners, others to morality) can be studied and described like any other aspect of social life, though such descriptions will of necessity be incomplete and somewhat interpretive.

These systems of implicit moral rules coexist, of course, with explicit rules, as exemplified in systems of law and regulation. Though my focus here is on the former, it's important to be aware of the subtle, complex and often contentious relations between the two.

Just as the law is a system of enforceable explicit rules, so morality can be seen as a system of implicit rules. And just as the law outlines legal responsibilities and confers certain legal rights, so moral systems can be seen to assign responsibilities and confer certain moral rights. If you break society's explicit laws and are discovered, formal mechanisms of enforcement and justice are set in train. Similarly, if you break implicit moral rules, informal mechanisms (like disapproval and ostracism) will likely be triggered. The basic principle (hard-wired into our brains, perhaps) is that if you flout the rules you forfeit your right to the benefits and protections those rules might potentially provide.

Normative, as distinct from descriptive, approaches to ethics involve the individual actually becoming ethically engaged (rather than just describing what is). This will involve making, accepting or rejecting particular moral judgements, or affirming, endorsing or arguing for particular judgements or values. It inevitably involves interpreting social rules, sometimes criticizing them, and sometimes rejecting them.

Deontic logic traditionally divides behaviors into three broad classes: obligatory, impermissible and optional. ('Optional', by far the most appealing, is also, plausibly, by far the largest of the three classes.)
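For the sake of concreteness, the trichotomy can be spelled out in the standard textbook way, using a single obligation operator O (nothing here goes beyond the usual definitions; 'a' stands for an act or behavior):

```latex
% Standard deontic definitions: obligation O, prohibition F, permission P.
\begin{align*}
  \text{obligatory:}    &\quad O\,a \\
  \text{impermissible:} &\quad F\,a \;\equiv\; O\,\neg a \\
  \text{permissible:}   &\quad P\,a \;\equiv\; \neg O\,\neg a \\
  \text{optional:}      &\quad \neg O\,a \,\wedge\, \neg O\,\neg a \;\equiv\; P\,a \,\wedge\, P\,\neg a
\end{align*}
```

On these definitions, the optional is simply whatever is neither required nor forbidden.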

It's a complex branch of logic, but the real complications and challenges of practical moral thinking are not so much logical as contextual. Because, obviously, the general situation and the specific position(s) of the individual(s) involved need to be taken fully into account.

Times have changed since F.H. Bradley wrote his famous essay, 'My station and its duties' [a chapter, actually, of his book Ethical Studies (1876)], but the basic principle of the contextuality of ethics still applies. A person's duties or obligations derive in large part from (or at least cannot be assessed without taking into account) his or her positions in complex societal, professional and familial structures.

The key question in ethics is a situated-first-person question: what should I – in a particular situation at a particular time – do (or refrain from doing)? I say this is the key question in ethics, but such a question (and this is reflected in the ambiguity of the word 'should') often goes beyond ethics or morality, and merges with questions of prudence or etiquette or other areas or dimensions of life.

Unacceptable behavior causing serious harm to others, however, is clearly an issue in which ethical (and probably also legal) considerations will dominate.

What of ethical subjectivism? Is it a threat or a problem? My view is that, if normative ethics is seen as something theoretical, as an area of study to be compared and contrasted with descriptive (psychological or sociological) approaches, the former will inevitably suffer from the comparison, especially concerning claims to having an objective basis.

But if, on the other hand, normative ethics is seen in a more practical light, seen as an integral part of actually living and choosing rather than in purely academic or epistemic terms, then questions of objectivity versus subjectivity may not even arise.

The fact is, we are all forced to make ethical and other value-based decisions all the time. And, while empirical knowledge, reason and rational discourse can play an important part in these decisions, other more obscure elements are also inevitably in play.

Saturday, March 30, 2013

Pale, small, silly and nerdy

Thomas Nagel, whose atheistic rationalism has transmuted itself into a view of the world which is looking increasingly religious, is one of quite a number of prominent secular thinkers who have moved in this general direction in recent years. What's going on here, I wonder?

Well, I'm not optimistic about coming up with an answer, or even throwing much light on individual instances (I was going to write 'cases', but I don't want to imply a pathological cause!). This apparent trend puzzles me, and I intend to do a post or two on this general topic and/or on individual thinkers like Nagel or Hilary Putnam (who celebrated his Bar Mitzvah at age 68) in the future.

David Albert is another figure I intend to look at. Albert has very interesting views on quantum mechanics, but I haven't yet ascertained where he stands on religious or broadly metaphysical questions.

As I have pointed out elsewhere, Albert's dismissive review of Lawrence Krauss's book, A Universe from Nothing, was the catalyst which sparked a series of accusations and counter-accusations, culminating in the withdrawal of Albert's longstanding invitation to join a discussion panel for a high-profile event at The American Museum of Natural History.

Having argued that Krauss's idea of nothing (relativistic-quantum-field-theoretical vacuum states) was not nothing but arrangements of elemental physical stuff, Albert demonstrated his deep frustration with what he sees as Krauss's facile dismissal of religion in these two (rhetorically effective, at least) final sentences:

"When I was growing up, where I was growing up, there was a critique of religion according to which religion was cruel, and a lie, and a mechanism of enslavement, and something full of loathing and contempt for everything essentially human. Maybe that was true and maybe it wasn’t, but it had to do with important things – it had to do, that is, with history, and with suffering, and with the hope of a better world – and it seems like a pity, and more than a pity, and worse than a pity, with all that in the back of one’s head, to think that all that gets offered to us now, by guys like these, in books like this, is the pale, small, silly, nerdy accusation that religion is, I don't know, dumb."

I feel the force of what Albert is saying here. Religion has played a major role in our social, cultural and political history, and has in many ways been the matrix out of which the hopes and dreams and expectations of all civilizations have developed.

He is implying that people like Krauss lack both a knowledge and an appreciation of this dimension of human life and experience.

And maybe they do.

But the question still remains concerning the plausibility of religious claims in the light of our current scientific knowledge.

Even if many great and important elements of our civilization arose directly from religious traditions or in a religious context, it can still be asked whether religion is in any real sense still credible.

And for religion to be credible, its doctrines (or assumptions) must be credible.

The doctrines of Christianity are simply not credible, in my opinion. Nor the Jewish notion that an all-powerful, universal God favored and guided and protected a particular human population. Nor do Plato's mythic speculations stand up to modern scrutiny.

There is much more to be said on these issues, of course, but, if I had to say here and now where I stand after a good deal of thought and consideration over the years, I would have to come down on the side of those who feel that religion is, frankly, of the past; that it no longer has anything of value to offer.*

David Albert may well have been justified in criticizing Krauss for claiming that modern physics has satisfactorily explained why there is something rather than nothing, but those (now notorious) final two sentences, if anything, weaken his critique by suggesting that his outlook may have been unduly motivated by pre-existing attachments, by emotional factors in effect.

As I indicated above, I am currently in the process of trying to figure out where David Albert stands on questions of religion.



* I am aware that the words 'religion' and 'religious' can be used in a broader sense to encompass, not just more or less clearly defined traditions, but ways of feeling and thinking which might pick up on certain religious themes or attitudes or points of view – a general sense of providence, for example, along the lines of Julian of Norwich's "all shall be well" but without the trappings of specifically Christian belief. Religious thinking in this sense cannot be so readily dismissed.

Tuesday, March 5, 2013

Philistines and epigones

In my previous post I referred to Colin McGinn's suggestion in an essay published last year by the New York Times that academic philosophers should be designated as 'onticists' and their discipline as 'ontics' or 'ontic science', linking this strange episode to my concerns about the status, viability and worthwhileness of philosophy.

Masochist that I am, I have reread both McGinn's original piece and his reply to his critics, and I'd like to make a few comments on my understanding of his point of view.

Essentially, McGinn helps me make my case that philosophy – however it is designated – is no longer a viable discipline.

The first point to make is that his view of philosophy (and its continuing relevance and value) derives from underlying assumptions, some of which are not made explicit.

I will leave aside his ideological (ethical and political) commitments, though they are arguably relevant (at least indirectly) to his view of philosophy. I may do a piece at Conservative Tendency some time on these matters. For now, I will just mention that, influenced by Peter Singer, McGinn appears to have radical views about vegetarianism and animal rights.

Although he is an atheist, McGinn is well-known for defending a mysterian position; that is, he believes that traditional philosophical problems like consciousness and free will are real problems which no scientific developments will solve. They are, and will remain, specifically philosophical problems (and probably unsolvable due to the limitations of our brains).

In the original piece, he describes philosophy (or ontics) as having as its primary concern "the general nature of being". He quotes a dictionary definition of philosophy as "the study of the fundamental nature of reality, knowledge and existence".

He continues, saying that we can simplify [hah!] this definition "by observing that all three cited areas are types of being: objective reality obviously is, but so is knowledge, and so also are meaning, consciousness, value and proof, for example. These are simply things that are."

"So," he concludes, "we study the fundamental nature of what is – being."

My response is to question the coherence and worthwhileness of this project. While reflective and speculative thinking which is tied closely to a specific discipline, and which grows naturally out of research findings in that discipline, is truly important and often vital for future epistemic progress, the vague and general and medieval-sounding notions put forward by McGinn are confused, unconvincing, hollow and self-serving.

I note also that, as part of his rhetorical pitch, McGinn is assuming the high cultural ground in accusing scientists who lack an interest in these matters of philistinism.

Which is rather ironical when you consider some of his name-change ruminations, which sound philistine or worse to me. 'Ontics' is his preferred choice, but McGinn also mentions some possible alternatives, including (can you believe it?) 'beology', 'beological science' and 'beotics' (all presumably based on the verb 'to be'). I find it hard to believe that a cultured, intelligent and highly educated man could even have half-suggested anything so utterly stupid and childish as this. No wonder philosophers are losing respect.

In his second essay on the topic – his reply to his critics – McGinn confirms that he is defending a more or less traditional view of philosophy, with metaphysics at its core. He writes: "My conception of philosophy is broadly Aristotelian: the subject consists of the search for the essences of things by means of a priori methods... The things whose essential nature is sought range from space, time and matter, to necessity, causation and laws, to consciousness, free will and perception, to truth, goodness and beauty."

McGinn may well be an anti-theist, but I perceive here an anti-scientific perspective also, and perhaps even, in some sense, a religious one. It is worth noting in this context that McGinn was influenced by the rationalist philosopher, Thomas Nagel, whose long-standing anti-physicalism seems to be evolving into an anti-scientific if not religious stance. [I may have a closer look at Nagel, and other philosophers who have moved in a similar direction (such as Saul Kripke, who also influenced McGinn), in a future post or posts.]

I have said in the past that I see philosophy as being essentially parasitic on religion – in the sense that it only thrives in an intellectual environment in which religious (or similar) views also thrive, whereas a physicalist outlook (which I would suggest most educated people take for granted these days) has no need – and no place – for philosophy.

I don't want to get involved here in defining exactly what I mean by physicalism, and certainly not in mounting a defense of the position. In fact, my argument here is not that physicalism is true; rather, I am arguing merely that a commitment to physicalism is not conducive to having a high regard for philosophy, whereas having a (in some sense) religious view of the world is.

For the purposes of my argument, physicalism might be understood simply as an updated version of good old-fashioned materialism – which is no longer viable because matter is no longer seen by physicists as the fundamental stuff the universe is made of.

Physicalists look to physics and the other sciences for our best understanding of the universe and ourselves (who constitute, of course, a small but possibly quite important part of that universe).

They reject (or see no reason to accept) religious ideas; likewise any notion of a spiritual realm.

And they generally defend their beliefs by referring to empirical evidence.

Mathematics, in my opinion, is the only discipline with a plausible claim to constitute an area of non-empirical knowledge.

But McGinn's talk about philosophy's a priori approach takes us far beyond the constrained and disciplined methods of mathematics; back, in fact, to a pre-modern view of the world. Indeed, as we have seen, he even compares his approach to Aristotle's.

Which, in my opinion, does Aristotle – who was a great thinker with naturalistic tendencies, a proto-scientist in fact – a grave injustice.

If Aristotle knew the science that we know, he would not be Aristotle. And if Aristotle were transported to our time, I have no doubt that he would be far more interested in talking to biologists and physicists than to philosophers.

In fact, I can readily imagine him, with his aristocratic background and passion for understanding living creatures, getting on rather well with the almost aristocratic Richard Dawkins.

On the other hand, he would be very likely to give short shrift to the putatively Aristotelian Colin McGinn and his unscientific philosophical friends.

Friday, February 22, 2013

The fading of philosophy

Last year, the well-known and respected philosopher Colin McGinn suggested that we replace the term 'philosophy' with 'ontics' to designate the area that academic philosophers – or 'onticists' – are engaged in. But, in my view, philosophy's problems are deep and long-term, and such talk of rebranding the discipline on the part of a leading practitioner only serves to underscore its parlous state.

My suggestion is not that we replace the term, but that we de-emphasize it, recognizing that it has gradually lost any substantive meaning as the designation for a stand-alone discipline.

When, in the early 20th century, metaphysics in general and philosophical idealism in particular fell out of favor, philosophy itself (of which metaphysics had traditionally been seen as the core) began to suffer a crisis of confidence and a fall in status. This trend was exacerbated as psychology and other social sciences established their scientific credentials, and severed their links with philosophy.

It's no wonder that the very existence of philosophy as an academic discipline began to be called into question, and, in the post-World War 2 period, it was seen by a significant number of its practitioners as being limited to serving a clarificatory function with respect to the sciences. It was seen as a handmaid to science in a similar way that, in the medieval period, it had been seen as a handmaid to theology.

Since the post-War period, things have gotten more complicated. There have been attempts to revive a more traditional and general role for philosophy, but, in my view, these attempts are fundamentally ill-conceived.

I would be happy to explain my case in detail to anyone who might be interested (a small and shrinking section of the population, I suspect), but for now will just say that I don't see whence such a discipline derives its authority.

Science derives its authority from established procedures generally referred to as 'the scientific method' which ensure that, to a large degree at any rate, human bias is filtered out: scientific findings are objective, and so authoritative. Mathematics incorporates similar procedures which allow results to be objectively assessed.

But nothing like this happens in philosophy. And, consequently, there is no convergence of opinion and slow building of a body of genuine knowledge such as occurs in the sciences.

This is not to say that science can play the role that religion played, that scientific knowledge can provide certitude, solace and a sense of purpose. It can't.

Nor can 'philosophy' of course. No discipline can.

There is – and always will be – a place, however, for reflective thinking. My point is simply that it doesn't constitute – and nor is it encompassed by – a single academic field or discipline.

Meta-thinking will always occur within and about the various physical, social, and historical sciences, and other disciplines, such as mathematics and logic. Such thinking can still be usefully described as the philosophy of physics, of biology, of history, of logic or whatever.

But reflective – and value-based – thinking about human life in general is more problematic. We all do it, but it is not – and cannot be – systematized. I have written before about my reservations about philosophical ethics, which I don't see as a viable area of research, or as something that constitutes a distinct and authoritative intellectual discipline.

What about reasoning in general (considered from a normative and pedagogical perspective)? Isn't this an area in which philosophers can claim expertise? Perhaps.

I do see a need to develop and promote logical thinking and skills related to reasoning and argument. But, though this is an area which philosophers are claiming as their own, I don't see it as necessarily being connected to philosophy at all.

In a recent discussion on a comment thread at Rationally Speaking, Paul M. Paolini argued (against me) that in areas beyond the scope of science and mathematics (like normative ethics) we have a stark choice between philosophy on the one hand and dogmatism or chance on the other. I made the point in response that defining philosophy in terms of rationality was inappropriate. Rationality is clearly a broader concept than philosophy, and there are many non-philosophical ways of dealing with general questions of human life and value which are compatible with reason.

In fact, ordinary educated discussion and deliberation may allow more scope for valuable, experience-based intuitive insight than the often stale, flat and superficially complex reasoning style of professional philosophers.

The careful use of language and sophisticated reasoning skills have traditionally been taught, directly and indirectly, in a variety of academic and intellectual contexts and subject areas. No knowledge of philosophy or philosophical logic is required to think and reason and argue well (which is not to say that a knowledge of informal and formal logic may not be useful).

On the question of logic, it is well to bear in mind that the crisp and complex grammars of ordinary human languages encapsulate the fundamental logical principles upon which formal logical systems are built. And ordinary language use is predicated on an understanding of these principles.

Philosophy and philosophers don't have a monopoly on reasoning or the teaching of reasoning, in other words, even if, historically, much cataloguing of logical fallacies and so on was done by philosophers.

Modern approaches will, of course, incorporate (but need not be restricted to) the findings of research in various branches of psychology concerning our brains' inbuilt biases of perception and reasoning.

Regarding the broader, more ambitious questions with which philosophy has traditionally dealt, we need to recognize that many of these questions are unanswerable, either because they are conceptually confused or because they are beyond the scope of current science (and perhaps also beyond our intellectual capacity).

It helps however – at least in my experience – to cultivate a sensitivity to, and a critical interest in, the fundamental media of our understanding: above all, ordinary language and its implicit logic; but also, if possible, mathematics and other formal systems.

The goal, as I see it, is clarity with respect to what we know, coupled with an awareness of the vastness of our ignorance and the need to remain alert to the possibility of new – and unexpected – perspectives.

Friday, January 18, 2013

Ethics in a nutshell

[Note: I am no longer happy with this, and intend to post a revised version soon. April 4]

Ethics and morality are important topics, but much ethical discussion and debate is unenlightening and unproductive.

I have serious reservations about philosophical ethics. Whilst a knowledge of some of the rudiments of ethical theory may be useful for articulating issues and problems, there is no clear way of solving problems or deciding between alternative approaches. The academic study of ethics soon becomes (in my experience) an area of rapidly diminishing returns.

Different people have very different ideas about the scope and nature of ethics, often talking at cross purposes or seeking to promote a cherished agenda by any means, including personal abuse.*

Rather than elaborating ambitious theories or contributing to the revival of Aristotelian or other classical approaches, I am drawn simply to look at how adjectives like 'ethical' and 'moral', auxiliaries like 'should' and nouns like 'obligation' or 'duty' are actually used in ordinary day-to-day contexts, and the implicit social rules with which such expressions are associated.

Every society, every social group incorporates implicit rules of behavior. These rules (some relating to etiquette or manners, others to morality) can be studied and described like any other aspect of social life.

Prescriptive (as distinct from descriptive) approaches involve the individual actually making or accepting or rejecting moral judgements or using or applying moral language or concepts.

Deontic logic traditionally divides behaviors into three broad classes: obligatory, impermissible and optional. It's a complex branch of logic, but the real complications and challenges of moral thinking are not so much logical as contextual. Because, obviously, the general situation and the specific position(s) of the individual(s) involved need to be taken into account.

Times have changed since F.H. Bradley wrote his famous essay, 'My station and its duties' [included in his Ethical Studies (1876)], but the basic principle of the contextuality of ethics still applies. A person's duties or obligations derive in large part from (or at least cannot be assessed without taking into account) his or her positions in complex societal, professional and familial structures.

Kant talked about a categorical imperative, but I don't think we can get beyond hypothetical imperatives. In other words, if you (in such and such a situation) want such and such an outcome, do this or that. With respect to social relations, this way of thinking is never straightforward or foolproof, and requires judgement and insight to be applied successfully.

The kind of (implicit) rule-based approach to ethical thinking and manners which I am advocating is consistent with a very modest view of rights. If you break society's implicit rules whenever it suits you, you forfeit your right to the benefits and protections those rules might potentially provide.

The key question in ethics is a first-person question: what should I do (or refrain from doing)? I say this is the key question in ethics, but such a question (and this is reflected in the ambiguity of the word 'should') transcends ethics or morality.

Ethical or moral questions often merge into questions of etiquette, aesthetics and prudence as well as other areas or dimensions of life. There are no clearcut divisions between ethical and other considerations, in other words, and a certain type of (marginally unacceptable) behavior may be condemned by some as immoral, while others might prefer to call it ugly, unwise or just bad form. Others may see it in a positive light.

Even very serious moral transgressions (like the indiscriminate killing of civilians) are sometimes seen by people in the grip of certain ideologies or belief-systems as praiseworthy.

Most of us, of course, will condemn such ideologies as noxious and depraved. I certainly do. It is not really a problem that we can't prove our view correct and its converse incorrect in some objective, theoretical sense (though many think it is). Ethics is just not like that.

Quite simply, there is no absolute or objective ethical authority, and nor is there any objective method of determining 'moral truths'.



* Here is a summary of a recent controversy involving some very silly and intemperate assertions on the part of one of the protagonists.