Monday, December 23, 2013

The phantom self

Set out below is the core section of Gordon Cornwall's analysis of the 'phantom self' (taken from the post to which I provided a link in my previous post on this site).

But first, my brief critique.

I do go along with Cornwall (and with Derek Parfit) to the extent that they deny the existence of any substantive self. What exists are bodies which are, at a basic level, conscious of their existence as (mortal) bodies and, at a more complex (and problematic) level, subject to the illusion of a (potentially independent) immaterial self.

Planning and thinking about the future need not involve these problematic beliefs in any essential way, it seems to me. And imagining possible threats to one's well-being (and the well-being of loved ones) – which of course lies at the basis of intelligent behavior and planning – needn't lead to neurosis or anything like it.

It is true, however, that our awareness of our own mortality does, at a fundamental level, cast a long shadow, putting a dampener on joy and placing real constraints on human happiness.

Parfit's statement (cited by Cornwall) that "ordinary survival is about as bad as being destroyed and having a replica" may be playful. But it seems to me only to make sense if you deny the existence not only of a substantive self but also of the sense of a specific self which a body generates as it 'survives' from minute to minute and from day to day.

This specific-body-generated first-person point of view is what we are, and we would prefer (under most circumstances) that it continue. I don't see how having a surviving 'copy' would allow that to happen.

Finally, Cornwall seems to misunderstand the distinction between the public, objective stance of science and the first-person perspective – which encompasses all of what he calls 'practice' as well as our subjective understanding (even when the latter is informed by science).

I just don't see any serious problems with a straightforward physicalism, at least as it pertains to the scientific understanding of the relationship between the body and the sense of self.


Cornwall writes:

"Belief in the special, separate unity of the self comes naturally to humans. It is the result of a trick of natural selection. Having a self-model is an adaptive feature of complex animals that are capable of moving around. The self-models of such animals are tightly coupled to their motivational systems, which include their emotional systems. The appearance of an immediate threat to self triggers a strong emotional response in most animals, activating the amygdala and launching a flood of psychosomatic and behavioural responses which tend to help them survive the crisis.

Humans are unlike most other animals in that, with our highly developed prefrontal cortices, we are capable of imagining and making detailed plans for the future. As part of imagining the future, we imagine ourselves in the future. Visualizing a threat to oneself in the future triggers an emotional, motivational response similar to that which would occur if the threat were actually happening on the present scene. The response is enabled by strong projections from the prefrontal cortex to the amygdala and associated limbic regions of the brain. The ability to label an imagined entity as ‘self,’ and have it trigger this kind of emotional response, is an adaptation that, perhaps more than any other, propelled our species into our present position of earthly dominance. Unfortunately, this adaptation [...] came at a considerable cost in unnecessary suffering. It is an effective design, but not a very good one. It is far from optimal, and certainly not elegant.

One way to view this idea is as another outgrowth of the scientific physicalism that has illuminated so much else. Looking at what we have learned in the past few hundred years, it is hard not to be impressed by scientific physicalism as the source of our most far-reaching and productive changes in outlook. Out of it came the demise of geocentrism. When the direction 'down' was displaced as a fundamental orientation of the universe, so our parochial planet was displaced as its centre. Ceding centre stage is always salutary; it resulted in a widening of horizons, a deeper engagement with extraterrestrial reality.

Scientific physicalism was also Darwin’s mindset. We no longer see ourselves as the pinnacle of creation, but as blood relatives of all other species on this planet, an extended family of creative solutions to the problem of life. They reflect us in countless ways, and we will learn from them for a long time to come. Understanding natural selection, we come to know that we are not the product of a perfect design process. We are beginning to see opportunities to improve on our own nature.

The productivity of scientific physicalism stems from its ontological parsimony. Science does not assume the existence of entities that are not needed to explain observations. Physicalists saw the opportunity to dispense with a fixed framework of space-time in which all objects had a position and velocity. There is no such framework; hence the insights of relativity. Physicalists do not need to assume the existence of God, either. What most people don’t quite realize yet is that the selves they imagine themselves to be can also be dropped from the scientific ontology, with a resulting gain, not loss, in its explanatory power. If you simply look at what is, then Parfit’s famous statement that "[o]rdinary survival is about as bad as being destroyed and having a replica" gains the presumption of truth, for there is no evidence for the existence of anything so mysterious as its negation implies. I should point out that Parfit’s characterization of ordinary survival as ‘bad’ is playful; this insight into what survival amounts to is all to the good. To embrace it is to escape the glass tunnel and engage with life on a broader scale and a longer time dimension, one that extends long after one’s biological death.

One more thing. My approach to this subject has been, and remains, one of intellectual discovery. I’ve always been more interested in learning the truth than in changing myself. Advocates of ‘spiritual practice’ sometimes tell me I’m doomed to failure; the truth cannot be grasped intellectually. Respectfully, I think the jury is out on that. Western philosophers in the analytical tradition have justly been criticized for mistaking their own failures of imagination for metaphysical necessity. So, too, past failures to intellectually grasp religious insights into ‘no-self’ should not be taken as proof that all such attempts in future will also fail. Scientific progress has achieved much, and will achieve much more. I don’t know of any convincing argument that science cannot leap this hurdle."

Thursday, December 12, 2013

The glass tunnel

Adrian McKinty is to blame. He started a discussion on Derek Parfit's perennially frustrating ideas on personal identity and death. You will see that I reiterated my previously-stated views* (which are similar to Adrian's own) in the course of an exchange on the comment thread.

And now I have stumbled across Gordon Cornwall's sophisticated analysis which defends Parfit's view and so implicitly challenges mine.

My intention, then, is to revisit the very important questions that lie behind these discussions, initially by reading and thinking about what Gordon Cornwall has to say. I can't reject it just because it has a mystical or religious feel which I don't like and which makes me suspicious (just as Parfit's approach does).

But first let me make a few general comments on my attitude to Derek Parfit as well as trying to set out the emotional context of my thinking on these matters.

When I first encountered Parfit's 1984 book, Reasons and Persons, I remember concluding that his view seemed inconsistent with planning and caring about one's future, with prudence basically. But Parfit himself seems to have made it into his eighth decade without any trouble – and (if his claims are to be believed) with less stress than would have been encountered had he retained his earlier, more conventional view of human identity.

My main concern, however, is not to decide which view is more conducive to longevity or quality of life, but rather to figure out which view gives the truer picture of our individual selves.

Parfit experienced his change of viewpoint on personal identity from a conventional view to one which did not privilege the future over the past – and which downplayed the centrality and perhaps even the reality of his very existence as a self – as liberating.

Previously, he had, as he put it,

"... seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of the glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others." [Reasons and Persons, p. 281]

This talk about caring for others (especially from a son of medical missionaries) makes me wary. Is Parfit merely adopting (the broad outlines of) an essentially religious outlook and rationalizing it in philosophical terms?

But before turning (in a subsequent post) to examine alternative views more closely, let me set out briefly the broad outlines and emotional drivers of my current position.

My position could be seen as narrower than Parfit's, and it aspires to an almost animal-like simplicity. ('Almost' because animals don't worry about the future – or foresee their own inevitable deaths.)

Though I doubt that my self has any substantive reality (and to this extent I may have more in common with Parfit than I am assuming here), I know that whatever reality it has is entirely dependent on the continuing existence and proper functioning of this body. Oversimplifying: I am my body.

The tragedy is, of course, that this body, like all bodies, will fail in the end. This is just how things are. Life is tragic (and comic and pathetic), and not at all bathed in sweetness and light as some religiously-minded people are inclined to see it. From my perspective, at any rate, it seems more honorable – and more honest – to interpret life in pessimistic and uncompromising terms.

This need not entail an entirely non-religious outlook (think of Miguel de Unamuno, for example), though my approach is non-religious.

An anecdote might help explain some of my values and attitudes. Some years ago my mother had very bad pneumonia and spent a number of truly terrible weeks in an intensive care unit: close to death, hooked up to a daunting array of machines and unable to speak (because of a tracheotomy). The family was called in for a meeting with the senior doctors and nurses: they were clearly expecting her to die.

In the ICU there was a 1:1 nurse-to-patient ratio – each nurse on duty was assigned to a single patient – and we visiting family members got to know some of the nurses quite well. I don't remember much of what was talked about, but I clearly remember one of them commenting that she preferred dealing with (and liked) patients who fought against death. And my mother decidedly was (and still is) such a fighter.

On more than one occasion when I came to sit by her bed when she was at her lowest ebb and hooked up to all those tubes and machines she turned and appeared to attempt to climb over the bed rails towards me. When I first witnessed this, it took a few moments to realize what she was trying to do. It was at once grotesque and sublime – and extremely moving.

I don't want to make too much of this and suggest that those who "rage against the dying of the light" are right and those who opt for more dignified options are wrong. And I fully realize that of course a nurse – especially one specializing in critical care – is going to prefer patients who don't die on her.

But speaking personally, though I admire those who decide to end their own lives when the signs are that those lives have reached a certain level of completeness, I am rather less keen on going (when the time approaches) with dignity and rather more keen on hanging around for as long as possible.


Now, having aired my general thoughts and feelings on the matter, I will try to put them out of my mind and examine what Gordon Cornwall has to say (see link above) with an open mind.



* See, for example, this post.

Thursday, November 21, 2013

Science and self-effacement

Famously – or perhaps notoriously – Stephen Jay Gould proposed that science and religion constituted non-overlapping magisteria. In my opinion, his claim was not plausible; but a similar claim regarding the sciences and the arts does stand up.

I want to focus here on the issues of self-expression and collaboration.

Individual and creative thinking plays an important role in science, but it involves a form of creativity far removed from the sort that applies in the arts. The latter is always bound up with self-expression, whereas self-expression has no role to play in science.

So self-expression can be seen not only as a key demarcation criterion between the arts and the sciences but also as an indicator that these pursuits are opposites, incompatible, non-overlapping. It is a crucial part of the one, and plays no part in the other.

Collaboration, on the other hand, occurs in both the arts and the sciences. But it is an essential – and defining – feature only of the latter.

The vast majority of the greatest works of literature, music and the visual arts are attributable essentially to one man or woman. The artist draws, of course, on his or her teachers and the broader culture but in a real sense owns – as author or creator – the finished product.

Similar notions can apply even to necessarily collaborative arts like the cinema. Think of the director, Alfred Hitchcock. The best of the early films he made in England have the same winning combination of suspense, latent eroticism and humor as his American masterpieces even though he was working with entirely different people in a very different cultural context.

The arts are by their nature self-expressive, even if the expression is often, as in theatre, cinema, etc., group-based or, as in much medieval art for example, anonymous. But even in these cases, I would argue, the greater works will be more likely to bear the stamp of an individual genius or personality.

Science is just not like that. It is the antithesis of self-expression, and is all about building a common body of knowledge. To the extent that the individual's ideas are deemed to be important, to that extent the science is undeveloped and uncertain. As a science matures all traces of pioneering individual contributions are erased or at least merged into a greater, more complex and more subtle body of knowledge than any single mind could even begin to comprehend.

There was an interesting exchange a while ago on a comment thread at Rationally Speaking about the nature and the scope of science which has a bearing on this point. A German botanist working in Australia was arguing that science is concerned with everything empirical and is defined primarily in terms of its communal nature.

"... [I]t is not science if I personally figure out whether Craspedia species are apomictic. I have to share this information in a way that allows other humans to test it, reproduce it, and build on it, because science is a community effort. But then it would be science no matter how trivial the fact."

Though not everyone will see the collaborative side of science as a key defining feature – another commenter calls it "unusual" as a demarcation criterion – science has, in my opinion, an essentially communal, individual self-erasing nature. (It imposes self-effacement, as it were.)

This criterion also fits mathematics. You get untutored geniuses (like Ramanujan) but it's only when they are integrated into the mathematical community (as Ramanujan was, thanks to G.H. Hardy) that they become real mathematicians.

Thursday, October 24, 2013

Myths with pretensions

I commented recently – in the context of a post about myths relating to race and (Jewish) identity – that one of the things I like about science is its myth-destroying power.

And science (broadly construed to take in the historical sciences) certainly does have that power. But it is – I readily admit – a strangely disturbing power. It goes against the grain of human psychology and culture, both of which are irredeemably myth-ridden.

So when I said I 'like' that aspect of science, I was oversimplifying – leaving out the sense of ambivalence.

Let me give an illustration based on the final couple of years of my religious phase which relates not just to myths but to metaphysicalized myths – myths, if you like, with pretensions.

Two kinds of thinker appealed to me, but each in a different way.

On the one hand were those who distilled the essence (as they saw it) of the Christian myth and offered a deeply satisfying (for those who could accept it) way of relating to the challenges of life which incorporated a very deep, intuitive but historically-validated understanding of human psychology. For me, these thinkers were largely those in the Protestant (and particularly the Reformed) tradition who embraced Paul's emphasis on the absolute power of God. My favourite was Karl Barth.

But I was also attracted to a completely different kind of scholar – more scientifically- and historically-oriented – who offered none of that psychological comfort, but who offered another kind of liberation entirely. Rudolf Bultmann spoke of demythologizing the faith, but what he was doing was simply reinterpreting the old myths. More convincing were those who didn't talk about faith at all but who sought merely to elucidate the historical background of the New Testament. And the more I understood that background, the less plausible the Christian interpretations (and myths) came to seem.

But when one gets rid of one myth another will often arise to take its place. Social and political myths, for example, often take the place of religious ones.

Our brains have a special affinity for simple narratives which is explicable no doubt – at least in part – in terms of the need to generate the sense of a coherent, continuing self. We also have a strong tendency (which manifests itself in the grammars of natural languages) towards animism – seeing even inanimate nature as exhibiting human-like intentions and purposes. And though we have come to accept science's non-teleological explanations, many still shy away from these as ultimate explanations. For example, there is resistance to the view that randomness is, as modern physics suggests, a fundamental characteristic of reality.

In a sense, science is – and always will be – an unnatural activity, and the scientific worldview is a peculiarly unsatisfying one. Those of us who are committed to a scientific view of the world will always be, I fear, to some extent at war with our own natures.

I have also been thinking about mathematical Platonism again recently. Though such a view is essentially a timeless one (and so renounces narratives in the normal sense), it may still be seen to incorporate elements of myth (and teleology) as well as metaphysics. How else would it manage to exert such a strong emotional attraction (as it clearly does for many)?

The question of the plausibility of mathematical Platonism (or realism) is so important because it impinges on broader questions, such as the viability of an empiricist worldview. In fact, mathematical realism can be seen – and is seen by many – as posing major challenges not only for empiricism but also for physicalism.

And, as my instincts are (for want of a less abstract way of characterizing them) deeply empiricist and physicalist, I need to settle on a particular view of mathematics and see if, or to what extent, I will be forced to modify the basic way I see the world.

I am quite resigned to the fact that I will never entirely escape mythical thinking, but my goal is – if possible – to rid myself completely of the grand, intellectualized and metaphysicalized kind of myth and settle instead for the humble and commonplace variety.

Like the perennially-appealing prospect, mooted in a famous section of Homer's Odyssey and revived in the 18th and 19th centuries, of retiring to an exotic island paradise and drifting extremely slowly into a peaceful and uncomplicated old age.

Monday, September 23, 2013

Anti-metaphysical musings

I have been looking recently at some material relating to "the metaphysics wars", and thought it worthwhile to jot down a few notes.

No doubt, my general position would be characterized by those with other views as scientistic. It is also anti-metaphysical in that I don't see the traditional philosophical discipline of metaphysics as having much point these days.

I don't deny that there are very interesting questions in the philosophy of physics, the philosophy of mathematics and the philosophy of logic which may be characterized as metaphysical. The meta-thinking that goes on at the margins of physics, other sciences and mathematics, etc. is necessary and valuable.

But somehow, when such thinking moves away from the discipline in question and becomes more generally philosophical, problems arise.

Timothy Williamson is perhaps the most powerful and impressive advocate for this broader kind of metaphysics (and analytic philosophy generally). As an avowedly non-religious person, he can't be dismissed as having ulterior motives of a religious nature; and, being at home with formal – and specifically modal – logic, he can't be dismissed as natural language-bound or as being daunted in any way by technical rigor.

Some of the points he makes in this interview are good ones – such as noting the light that modal logic can undoubtedly throw on the workings and nature of natural language (via Montague grammar, for example), and perhaps also on the foundations of set theory – but I have to say that I am strongly inclined to reject the basic thrust of his argument in defense of metaphysics, and, by extension, philosophy (as he understands it).

Essentially, the questions he seems most interested in are reminiscent of medieval scholasticism. I too have great respect for thinkers such as Avicenna (to whom he refers approvingly), and also for more recent – and more mathematically sophisticated – exponents of that general tradition of thought (such as Bolzano, to whom he also refers). But it seems to me that any thinker who now aspires to deal with questions of what there is in a fundamental sense is obliged to base his or her account – at least in large part – on contemporary physics, or on mathematics if the focus is restricted to mathematical realities.

Williamson seeks to defend the relative independence of his core preoccupations from science by invoking the old shibboleths, scientism and reductionism, and rejecting naturalism as a confused and inadequate concept.

I grant that mathematics does pose problems for advocates of strong forms of naturalism and empiricism, and there are real unresolved issues in the philosophy of mathematics. But my preference is to address these issues in a broadly scientific and mathematical context rather than in a purely logical or philosophical one, or – worse – not to address them at all and instead merely to use them as a kind of justification or license for logical excess and metaphysical self-indulgence.

Williamson cites Quine as an example of scientistic naturalism.

"Quine privileged natural science, and in particular physics, over all other forms of inquiry, to the point of not taking very seriously any theory that couldn't be reduced to part of natural science."

Williamson's view, by contrast, more or less allows the analytic metaphysician carte blanche, and Williamson's own approach to analytic metaphysics is clearly – in my view at any rate – insufficiently constrained and guided by science.

Here, for example, is an extract from an old interview in which he explains his developing views:

"My work on vagueness and ontology doesn’t really concern ontology. Probably my most distinctive ontological commitment comes from my defence of a controversial principle in logic known as the Barcan formula, named after the American logician Ruth Barcan Marcus, who first stated it. An application of this principle is that since Marilyn Monroe and John F. Kennedy could have had a child (although they actually didn’t), there is something that could have been a child of Marilyn Monroe and John F. Kennedy. On my view, it is neither a child nor a collection of atoms, but rather something that merely could have been a child, made out of atoms, but actually has no location in space and time. The argument can be multiplied, so there actually are infinitely many things that could have been located in space and time but aren’t. It takes quite a bit of work to show that the Barcan formula is better than the alternatives! That’s what my next book will be on. The working title is Ontological Rigidity."

The book was actually called Modal Logic as Metaphysics, and this is how he recently stated its main point:

"I am ... saying that it is necessary what there is. Necessarily everything is necessarily something. There could not have been more or fewer things than there actually are, and which particular things there are could not have been different. What is contingent is only what properties those things have, and what relations they have to each other. I call that view necessitism. Its denial is contingentism. Who knows how far back necessitism goes? Maybe Parmenides was some sort of necessitist..."
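For readers unfamiliar with the technical background, the two principles at issue can be stated compactly in standard quantified modal logic. (These are the textbook formulations; the notation below is mine, not a quotation from Williamson.)

```latex
% Barcan formula: if possibly something is F, then something is possibly F.
% Williamson's Monroe/Kennedy example is an instance of this schema, with F
% read as 'is a child of Marilyn Monroe and John F. Kennedy'.
\Diamond \exists x\, Fx \;\rightarrow\; \exists x\, \Diamond Fx

% Necessitism: necessarily, everything is necessarily something
% ('it is necessary what there is').
\Box \forall x\, \Box \exists y\, (y = x)
```

Read this way, the dispute between necessitism and contingentism is over whether the domain of quantification itself could have been different, or only the properties and relations of a fixed domain.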

On the face of it, talking about (apparently countable) things (minus their properties and relations!) as given strikes me as breathtakingly naïve in the context of a physics-based understanding of reality. I can only imagine that Williamson is – like the medieval scholastics – implicitly asserting a privileged role for logic.

Quine's assertion of a privileged role for physics makes a lot more sense to me.

Admittedly I haven't looked at Williamson's ideas in any depth, but what I have seen so far – and what he says in this latest interview – really makes me question whether it would be worth the effort. I am intrigued, however, by what is driving such thinkers.

Strangely, Williamson appears not to be quite sure whether his latest work is meaningful or not – or at least seems unwilling to commit himself on the matter. Is there not just a touch of arrogance in this passage (from Chapter One of Modal Logic as Metaphysics)?

"This book compares necessitism and contingentism. Which is true? Of course the question has a false presupposition if the definitions of 'necessitism' and 'contingentism' lack meaning or content. But if every enquiry must first establish its own meaningfulness we are on an infinite regress, since the enquiry into the meaningfulness of the previous enquiry must first enquire into its own meaningfulness, and so on. Better to act on the assumption of intelligibility: readers can decide for themselves whether they understand the book as they go along, and recycle it if they don't."

This passage is a combination of facile reasoning and rhetorical sleight of hand. By using the word 'understand' in the final sentence, he subtly shifts the focus to the reader's possible inadequacy and away from the original question concerning the work's meaningfulness.

In fact, I am tempted to see Williamson's work as emblematic of a broader trend. On the basis of my (admittedly limited) knowledge of the history of the relevant intellectual cultures, I discern, since the middle years of the 20th century, a disturbing falling off in intellectual seriousness in secular circles accompanied by an equally disturbing rise in anti-scientific name-calling and credulity amongst those thinkers who remain favorably disposed towards religion.

I'll finish here with a few comments about Paul Horwich, Williamson's great philosophical antagonist, whose deflationary views on truth I have referred to favorably in the past.

Horwich is opposed to the sort of traditional theoretical philosophy ('T-philosophy') which Williamson defends. I have made the point that, though I broadly accepted Horwich's account of truth, I doubted that his Wittgensteinian view of philosophy was compatible with a continuation of philosophy as an academic discipline. And, interestingly, Williamson makes a similar point in the recent interview.

"...Horwich didn’t explicitly call for T-philosophy not to be funded. I pointed out that if the picture of philosophy in his book were accurate, philosophy should be abolished. The reader encounters just two sorts of philosophy: irrational T-philosophy, and level-headed Wittgensteinian debunkers of T-philosophers. Philosophy is presented as an activity in which some people make a mess and others clear it up. Why on earth should taxpayers fund that? It looks as though we’d be better off simply abolishing the activity altogether."

Finally, I was surprised (and a bit disappointed) to learn recently that Horwich rejects naturalism, and even more unequivocally than Williamson does. He cites not only mathematical but also moral claims as a basis for his view.

Horwich is more thoroughly Wittgensteinian than I had previously thought.

Wednesday, August 28, 2013

Ideology and science

Science is not – nor can it be, in fact – immune to ideological influences. Sometimes such influences may have a positive effect, but it would be naive to believe that such factors do not have the potential to cause distortions also.

Scientists, like anybody else, need to be motivated and often this involves them seeing their own research as defending or furthering broad convictions they might have about human nature or the world in general.

There are many cases of great scientists whose major contributions to science were largely inspired by what we now see as utterly false assumptions. Copernicus and Newton might both be seen as examples of this, their discoveries as it were transcending the flawed intellectual matrix – or worldview – within which the theories were framed.

The institutions and practices of modern science are not designed to screen out personal biases and unwarranted assumptions so much as to ensure that published conjectures, theories and experimental results are exposed to rigorous testing and assessment. The system works pretty well on the whole, encouraging intellectual rigor while not excluding the human element – imagination, creativity, etc. – which is essential for innovative thinking.

Areas such as evolutionary biology and the human sciences are particularly prone to ideological influences.

I have previously hinted at such influences in the case of research into linguistic development and evolution, notably in relation to the work of Michael Tomasello and his colleagues who seem to be adamantly opposed to certain formal approaches to the study of language. I am following up on this, and will have more to say in the future. (James Hurford's views appear to chart a sensible middle course, and are looking very plausible to me at the moment.)

And I have recently come across another example of ideology apparently driving scientific judgment and interpretation.

Last week Massimo Pigliucci published a list of his 'best' research papers on biological topics. It's clear from this list (and another on his Curriculum Vitae) that Pigliucci had, from the beginning of his research career, a special interest in defending and promoting the notion of phenotypic plasticity – the capacity of a genotype to produce different phenotypes in response to different environments.

In just about all the cited papers – most involving experiments with plants – the power of environmental factors to alter features of the organism is emphasized. A cursory look at the abstracts certainly suggests that the researchers (the papers are collaborative efforts) are highly unsympathetic to any approaches which could be construed as tending in the general direction of what has sometimes been characterized as genetic determinism.

Which is fine. It's only to be expected that researchers will approach such issues with strong opinions, and a degree of adversarial debate and discussion can be productive. In the end, the weight of evidence usually settles disputes, and the controversies then move on to other areas.

So I am not questioning the scientific value of Pigliucci's work – the scope and nature of phenotypic plasticity is clearly a topic of considerable interest.

But it is interesting to juxtapose his research interests in biology with his published comments about human intelligence.

In another of his recent blog posts, Pigliucci claims that environmental – cultural, in fact – factors are solely responsible for differences in patterns of involvement by males and females in different research areas. Genes don't have anything to do with it, apparently.

"[T]he fact," he writes, "that there are fewer women than men in a given field is likely the result of a large number of cultural factors (no, I don’t think it has anything at all to do with “native” intelligence, Larry Summers be damned)."

A commenter makes the point that "the greater variance of male intelligence is well established", and that genetic factors are obviously involved. The greater variance of male intelligence in this context means essentially that there is a greater proportion of individuals with very high intelligence amongst men than amongst women (and also a greater proportion of individuals with very low intelligence).
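The statistical point here can be made concrete: with identical means, a distribution with a slightly larger standard deviation contains a disproportionately larger share of extreme scores, and the disproportion grows the further out you go. A minimal sketch, using the normal distribution (the means and standard deviations below are invented purely for illustration, not empirical estimates):

```python
import math

def tail_fraction(cutoff, mean, sd):
    """Fraction of a normal distribution lying above `cutoff`
    (upper-tail probability, computed via the error function)."""
    z = (cutoff - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Two hypothetical populations with the SAME mean but slightly
# different standard deviations.
mean = 100
sd_a, sd_b = 15, 14  # population A is slightly more variable

for cutoff in (130, 145):
    ratio = tail_fraction(cutoff, mean, sd_a) / tail_fraction(cutoff, mean, sd_b)
    print(f"above {cutoff}: A/B ratio = {ratio:.2f}")
```

The ratio exceeds 1 at every cutoff and increases as the cutoff rises – which is why even modest differences in variance matter most at the extremes.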

It is not impossible that some purely environmental explanation for this pattern could be found, but the evidence, even if it is not conclusive at this stage, certainly points to an at least partly genetic explanation. So the fact that Pigliucci seems to have a very strong disinclination to accept that genetics is significant here clearly goes beyond the science and points to a prior ideological commitment.

The emotional tone of his references to Lawrence Summers may not strengthen but certainly doesn't weaken my case. "I can't stand the bastard," Professor Pigliucci notes in a comment.

Pigliucci's strong ideological and moral convictions – which no doubt played a part in his decision some years ago to shift his focus from science to philosophy – may themselves be largely explicable in terms of cultural factors.

But I just can't help thinking about Massimo's (hypothetical) monozygotic twin who was raised by a Swedish family. Did he too follow a scientific career? Does he have a penchant for bow ties? Is he a religious skeptic? Does he too have strong views on political and social questions? And what is his attitude to Lawrence Summers, I wonder?

Sunday, August 11, 2013

Life, death and computation

I have been spending a bit too much time lately reading other people's blogs and (to some extent) participating in associated discussions. The main problem with this sort of activity is that – largely because the focus of discussions is always shifting – it encourages superficial debate at the expense of deep understanding.

But, interestingly, two recent blog discussions on two very different sites which I happen to follow touch on a similar theme.


Biologist and philosopher Massimo Pigliucci recently precipitated a freewheeling discussion of the relevance of computers and computing to understanding the human mind and the universe in general. In fact, Pigliucci's post on the topic prompted more than 200 comments, many of which are well worth reading.

Professor Pigliucci has a disarming tendency to rush in where more cautious academics fear to tread – that is, beyond his areas of specific expertise. (I suspect his approach owes something to the intellectual traditions of his native Italy, where academics have traditionally played an important role in the broader cultural, moral and political sphere.)

Pigliucci argues strongly against functionalist and computational views of the mind. I don't have strong views on this question, though I share Pigliucci's skepticism about some of the (as I see it) wilder claims about mind uploading and the scope of simulations etc.

I did, however, question his contention that seeing the operations of nature in computational terms is likely to lead to mathematical Platonism, commenting as follows:

My understanding is that many of the leading proponents of an information- and information processing-based approach to physics see information as physical. The bits or qubits are always 'embodied' in actual physical processes, albeit that these processes are understood at a deep level in terms of the processing of information. (There are close parallels between information theory and thermodynamics.)

So I'm not sure that such a view leads to Platonism. Seeing physical processes as algorithmic (and scientific theories as predictive algorithms) seems to me a genuinely interesting perspective: but it may well be that there is no way actual physical processes can be perfectly simulated (or predicted).
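One concrete point of contact between information theory and thermodynamics – a standard result, though not part of my original comment – is Landauer's principle: erasing one bit of information dissipates at least k_B·T·ln 2 of energy as heat. A quick calculation:

```python
import math

# Landauer's principle: the minimum thermodynamic cost of erasing
# one bit of information is k_B * T * ln(2).
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300             # approximate room temperature, K

landauer_limit = k_B * T * math.log(2)
print(f"{landauer_limit:.2e} J per bit erased")
```

At room temperature this works out to roughly 3 × 10⁻²¹ joules per bit – tiny, but a genuinely physical cost, which is part of why proponents of information-based physics insist that bits are always 'embodied'.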



Adrian McKinty is a novelist with a strong interest in social, cultural and philosophical topics. In the comment thread of a post at McKinty's nicely named site, The Psychopathology of Everyday Life (I know – Freud got there first) – a post about Philip Larkin featuring his confronting poem 'Aubade' – McKinty mentions Nick Bostrom's simulation argument: that if we accept two fairly plausible-seeming assumptions then our universe is almost certainly a 'simulated' universe created by an advanced civilization.

As I commented there:

I am ... (prompted by your comments, Adrian) having a look at Nick Bostrom's ideas. My initial attitude is skepticism, but that may just be what he would call my status quo bias jumping in.

I do think it makes sense (simply in terms of physics) to see natural processes in terms of information processing, but it is a big jump from there to thinking about beings who might have set the process going (and to calling it a simulation).

And what would Larkin make of all this? (Turning in his grave, I suspect.)


I am continuing to look into the simulation argument which I first encountered some years ago. More later, perhaps.

But regular readers will know that I am very skeptical of arguments and points of view which originate from a philosophical (as distinct from a scientific) base. Bostrom's main argument for the simulation hypothesis is in part statistical but basically philosophical – and, to my mind, far from convincing.
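The statistical part of the argument is easy enough to state. A toy version: if even a small fraction of civilizations run many ancestor simulations, then simulated observers come to vastly outnumber non-simulated ones. (The numbers below are invented for illustration; Bostrom's actual paper argues over a disjunction of cases rather than specific values.)

```python
def simulated_fraction(frac_civs_simulating, sims_per_civ):
    """Fraction of all observers with human-type experiences who are
    simulated, assuming each simulation hosts roughly as many
    observers as one 'real' civilization."""
    real = 1.0
    simulated = frac_civs_simulating * sims_per_civ
    return simulated / (simulated + real)

# Even if only 1 in 1000 civilizations runs simulations, a large
# number of simulations per civilization swamps the real observers.
print(simulated_fraction(0.001, 1_000_000))
```

The philosophical work is all done by the assumptions fed into that ratio – which is precisely where my skepticism lies.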

I can't help feeling that people like Bostrom (and David Pearce who influenced him) are driven by a kind of religious instinct. Certainly some of the groups with which they are associated have a cultish feel.

The other thinker mentioned by Adrian in the comment thread is Samuel Scheffler. Scheffler applies 'what if' scenarios to thinking about death. What if we knew the world was going to be destroyed soon after our death? His general point seems to be that, at bottom, we are less concerned about our own personal fate per se than about our fate seen in the light of a continuing social context.

This may well be, and such thinking is very much in accordance with the view that the sense of self derives from the linguistic, cultural and social context in which we grow up. But I think Scheffler overplays the extent to which future generations give meaning to our lives.

Also, I had a look at Scheffler's background, and it seems pretty clear that his being a socialist (he is apparently a disciple of the 'analytical Marxist' G.A. Cohen) has – to some extent at least – shaped his approach to thinking about the future in general, and about ethics.