Tuesday, July 22, 2014

Faint possibilities

I have identified a possible flaw in a thought-experiment-based argument I posted here ('Kirk dies') some time ago. I wrote:

"We willingly go to sleep at night. We willingly get anesthetized for an operation. We might also be happy to go into 'cold storage' for a long space journey or to survive a devastating catastrophe on earth (a 'nuclear winter', for example).

But, what if, though we could be certain the hibernation device would not fail to keep our body alive and in a resuscitatable state, we just did not know whether or not it would ever get around to waking us up?

Going into such a device becomes exactly equivalent to a game of Russian roulette. Death (as in the death of the body) is functionally equivalent to not waking up, ever. All the death of the body does is make it impossible ever to wake up. It takes away hope.

But, from the point of view of the unconscious person, hope – or any kind of expectation – is irrelevant. So the experience of death is equivalent to the experience of going into a state of unconsciousness – nothing more."

The problem (as I now see it) is that I was overlooking the fact that a body which is revivable must, like any living thing, be in some sense sentient.*

A person requires consciousness as well as a social context, etc., and so is much more than a sentient body or body components. But the latter is the basis and sine qua non of the former and – crucially – is what makes me me and not you.**

The higher functions of my brain (or my 'mind') fade in and out, and may go permanently before my body fails. Once they are gone, I (as a functioning person) am gone. But in another sense I live on in my separateness until the death of the body.

In a fundamental sense, basic sentience is a far more interesting phenomenon than consciousness or self-consciousness, as it is the root of the latter. If there is a deep mystery in the universe, sentience is it. The primitive organism in a rock pool attracted to the warmth of the sun: this is what is most remarkable.

A human body, then, in some imagined technology-induced kind of super-hibernation would be unconscious but sentient (probably in many ways). And such a 'sleep' – even if extended indefinitely – is not at all equivalent to death.

I thought I had demonstrated (assuming that a functioning human body is entirely physical, i.e. soulless) that there can be no afterlife, no waking up (as it were) after death.*** But my little thought experiment was, I think, fatally flawed.

All sorts of possibilities – especially in an infinite universe (or multiverse) with multiple copies and so on – remain in place (faint though these possibilities may be).



* I am assuming here that any process which completely shuts down the functioning of organs, cells etc. would not be reversible: it would in effect kill the body.

** I know there are difficult questions here – which I'm skirting – about what exactly confers identity on an individual, or what identity consists of or in, but (if one rejects a Cartesian view) the living body is clearly basic.

*** Looking at things from a first-person perspective helps to keep these sorts of discussions grounded. Look, for example, at the sort of question that a dying person might ask him/herself. Something like: "Is this the end, the end of all experience (for me)?" I put the 'for me' in brackets because the burden of the question lies elsewhere. The dying person is not interested in me-ness but in whether or not there are going to be any more experiences. One can imagine – looking forward – waking up as oneself at another (say, earlier) stage of life, as someone else entirely or even as a giant cockroach – and I think these imaginings are at least coherent. (Kafka's giant bug had psychological continuity with the man who suffered the transformation. My cockroach doesn't (necessarily), and nor does my someone else. Thus the inserted clause, 'looking forward'.)

Tuesday, June 17, 2014

On modern human origins and the emergence of complex language

Though much about the movements, migrations, interactions and material culture of early modern humans remains uncertain, researchers are making rapid progress. Questions concerning the non-material culture of our ancient ancestors, however – and, in particular, concerning their languages or modes of language-like communication – are far more problematic. What follows are a few reflections on what, in general terms, we know, and on what the prospects might be for learning more.


Our ultimate African origins are not in dispute, but fundamental disagreements remain. On one side are supporters of models which see modern humans as having migrated (more or less recently) out of Africa to other continents, replacing other hominins in the process. On the other are supporters of versions of a multiregional hypothesis, who see the evolution of modern humans from earlier forms not just as an African but as a worldwide phenomenon, involving significant interbreeding between different kinds of hominin, complex gene flows and a number of regional continuities dating back at least 200,000 years.

Despite these disagreements it is, I think, becoming increasingly clear that the recent African origin model, the view that modern humans arose as a new species in Africa and migrated to other continents around 60,000 years ago, replacing existing human species in the process, is at best an oversimplification. For there is now firm genetic evidence that interbreeding occurred between modern humans and Neanderthals in Europe and between modern humans and Denisovans in Asia, as well as evidence that migrations of modern humans occurred more than 100,000 years ago. New versions of the 'out of Africa' model – which push back the dates of migrations and take into account interbreeding between different human groups – bring it closer to a multiregional model, though any consensus is still a long way off.

A recent University of Tübingen research project exemplifies how the African origin model is changing. The study focuses on modern humans who migrated east via the Arabian peninsula, where stone tools dating from more than 120,000 years ago have been found. Two significant migrations – a very early one (ca. 130,000 years ago) along the southern coast of the Arabian peninsula and a much later one via a northern route – were hypothesized, and the researchers' models predict, in general terms, the actual data (skull measurements and genetic data) of population groups currently living in the Asia-Pacific region.

According to Hugo Reyes-Centeno, a leading member of the Tübingen team, Aboriginal Australians, Papuans and Melanesians were "relatively isolated after dispersal along the southern route" and other Asian populations were largely descended from groups migrating much later (about 50,000 years ago) along other routes, the main one going via the north of the Arabian peninsula and northern Eurasia.

These results need to be treated with caution, however, as the data on which the models are based are necessarily extremely limited and incomplete. The results also need to be integrated with other data, including, for example, findings which indicate that Denisovans, who were widespread in Asia during the Late Pleistocene, contributed some 4-6% of the genetic material of present-day Melanesians.

The Denisovans were named after a cave in southern Siberia where a finger-bone fragment was discovered from which DNA could be extracted. Geneticists have now managed to sequence the entire Denisovan genome to a high degree of accuracy.

Though closely related to Neanderthals, Denisovans seem to have interbred with an unidentified species and picked up some of its DNA. "Denisovans," claims David Reich of the Harvard Medical School in Boston, "harbour ancestry from an unknown archaic population, unrelated to Neanderthals." One possibility is that these scattered DNA fragments (which constitute only about 1% of the Denisovan genome) derive from H. heidelbergensis, which lived in Europe and western Asia between about 600,000 and 250,000 years ago. Another possibility for the source of the archaic genes is Homo erectus.


While new archaeological and genetic evidence about the early history of humanity continues to accumulate and the broad outlines of a plausible story are beginning to fall into place, making progress in understanding linguistic (and many other cultural) factors will be difficult. Sure, archaeological findings may throw some light on questions concerning where and when complex languages first developed amongst human populations and also on the vexed question of whether Neanderthals used complex languages. For example, there is strong archaeological evidence that behavioral and cultural changes occurred amongst modern humans about 50,000 years ago, and this may well suggest that it was at about this time that human languages similar in structure and more or less equivalent in complexity to languages spoken today first appeared. Also, evidence of subtle genetic changes – relating to the FOXP2 gene, for example – may yield clues about which populations were capable of complex language and which were not.

Theories of culture-driven gene evolution tend to support the idea that humans developed language in a piecemeal but not necessarily always gradual process. The basic notion is that the existence of some form of primitive spoken language (without complex syntax or an extensive lexicon) may have created a cultural environment in which certain small genetic changes – e.g. in the FOXP2 gene which is important for (amongst other things) the fine motor control of vocalizations – could have had huge evolutionary advantages and so spread rapidly, prompting further cultural changes which in turn would have facilitated further genetic change, and so on.
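
To get a rough quantitative feel for the 'spread rapidly' part of this story, here is a minimal sketch – my own toy illustration, using an arbitrary starting frequency and an assumed 2% fitness advantage rather than figures from any actual study – of the standard one-locus selection recursion:

    # Toy illustration: how quickly a variant with a modest fitness
    # advantage can spread under simple directional selection.
    # The starting frequency and the 2% advantage are arbitrary assumptions.

    def next_frequency(p, s):
        """Frequency of the favoured variant in the next generation."""
        return p * (1 + s) / (p * (1 + s) + (1 - p))

    p = 0.001          # assumed initial frequency of the new variant
    s = 0.02           # assumed 2% relative fitness advantage
    generations = 0
    while p < 0.99:
        p = next_frequency(p, s)
        generations += 1

    print(f"Near-fixation after roughly {generations} generations")

Even on these deliberately modest assumptions the variant sweeps to near-fixation in a few hundred generations – an evolutionary eye-blink – which is the general point such culture-driven accounts rely on.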

But, in the final analysis, anatomical, genetic and broader archaeological findings will only ever be able to answer very general questions about culture and language, and only to a certain degree of probability. It is difficult to see how more specific questions – concerning the nature of very early languages, or the extent to which particular groups such as Neanderthals or Denisovans developed their own languages or adopted (modified versions of?) the languages developed by modern humans – could move beyond the realm of speculation.

So, even if it could be argued convincingly on the basis of archaeological and genetic evidence that a certain population (modern or Neanderthal) at a certain time was extremely likely to have used a language of comparable complexity to today's human languages, the content of such a claim would necessarily be rather thin – and indeed linguistically vacuous – if the nature of that language is (and must remain) inaccessible to us.

In a recent post I referred to claims made by researchers at the Max Planck Institute that linguistic contacts with Neanderthals may have left discernible traces in the structure of non-African languages. However, given the time-frames involved and the fact that we only have access – and will only ever have access – to a minuscule fragment of the relevant linguistic data, it seems highly unlikely that even the most sophisticated computational approaches will be of much use. The researchers' claims are intriguing but, I would say, far too optimistic about what the sorts of approaches they are proposing could actually achieve.

It needs to be borne in mind that the earliest true writing systems for natural languages for which we have evidence date only from the 3rd millennium BC. Educated guesses and speculations about Proto-Indo-European, the hypothesized language from which the Indo-European language family (which includes Sanskrit, Persian, Latin, Greek and the Romance, Germanic and Slavic languages) is seen to derive, take us further back, but only to about 4000 BC.

Can knowledge of the languages spoken in recent times by preliterate peoples take us further back? Probably not. Though many of these languages have been recorded and analyzed, it would be a mistake to assume, even if the associated material cultures have been relatively stable for (in some cases) tens of thousands of years, that the languages themselves have exhibited anything like a similar stability.

The history of the human languages for which we have no historical written records (usually because there was no writing system but sometimes because written records have survived only in a fragmentary state or not at all) can only be hypothesized, largely on the basis of the elaborate comparative methods devised by philologists in the 19th and early-20th centuries coupled with general speculations about the speed and nature of linguistic change and its relation to broader social and cultural changes.

So it seems clear that, while broad evolutionary developments and migrations may eventually be mapped with a high degree of confidence, the cultures of our ancient, preliterate forebears will only ever be characterized in very general terms. Gaining substantive knowledge of the content of their cultures and belief systems, as of the actual (as distinct from the possible) nature and structures of the languages upon which these cultures were built and depended, lies forever beyond our grasp. The evidence just isn't there.

Monday, June 2, 2014

Neanderthal language

There has been a lot of talk in recent years about the intellectual and linguistic capacities of Neanderthals. This brief overview published last year in the research news section of the website of the Max Planck Institute reflects the currently popular view that their capacities were more or less equivalent to ours.

More controversial is the claim that the origins of modern language date not from about fifty or a hundred thousand years ago but from about one million years ago, "somewhere between the origins of our genus, Homo, some 1.8 million years ago and the emergence of Homo heidelbergensis." [Homo heidelbergensis is thought to represent our most recent common ancestor with Neanderthals, the split occurring about 500,000 years ago.]

More controversial still are claims being made by researchers at the Institute – and publicized in a recent New Scientist article [paywall] which is big on speculation but largely devoid of substantive content – that the cultural interactions between modern humans and their Neanderthal cousins included linguistic exchanges which left discernible traces in the syntax of non-African languages. Possible subtle structural differences between African and non-African languages coupled with detailed computer simulations of language spread would supposedly reveal something about the structural properties of hypothetical Neanderthal languages (which hypothetically impacted on non-African languages only). This is drawing a very long bow.

In a few days I will post some notes and reflections on some broader questions about early humans and the prospects for making progress in understanding their cultures and the nature of their languages.

Wednesday, April 16, 2014

A spectrum of sorts

General talk about views of the world can be very frustrating and unproductive. But reading this piece about the incompatibility between science and most forms of religion (and particularly the associated comment thread with its predictably divergent views) has prompted me to make a few general observations of my own.

The problem is not just that words like 'religion' are vague, but that more technical terms like 'physicalism', 'naturalism', 'idealism', 'empiricism' and 'rationalism' are also understood in different ways by different people. Countless scholarly articles have been written defining, redefining, defending or attacking particular positions. I may have another look at some of this literature soon, if only to review and refine the terms I use to define my own stance.

But I think the issues that really matter can be set out fairly simply in the form of a continuum. Such a basic, one-dimensional picture cannot, of course, begin to cover all angles or possibilities but it does allow one to represent in a plausible and useful way some of the most important differences in the way people see the world.

At one end of the continuum you have people who don't see any justification for believing in the existence of anything other than the sorts of things with which science (and mathematics) is – at least potentially – equipped to deal, whether one is thinking of the fundamental structures and processes addressed by physics or the more complex structures and processes dealt with by other areas of science.

What people at this end of the continuum reject is the notion that, in addition to the reality (or realities) studied by the sciences (including the social sciences), there is some other reality not amenable to science which impinges on our lives: a spiritual realm, say, or a transcendent moral realm, or some form of 'destiny'. The crucial point here is that scientific approaches do not reveal any underlying purpose or goal or enveloping moral reality behind the phenomena of the natural world – in fact they appear to reveal the absence of any such thing.

At the opposite end of the continuum you have people who embrace a view of the world which purports to go beyond the science and which incorporates spiritual or supernatural or teleological or transcendently moral elements.

At the extreme are believers in spiritual or supernatural forces which can override normal physical laws. Most well-educated religious people today, however, accept that the physical world operates as described by science and that the spiritual or supernatural realm with which their religious beliefs are concerned is – must be – quite compatible with scientific reality. Such sophisticated believers could be seen as embracing both naturalism and (a subtle form of) supernaturalism. Or, looked at another way, as believing in a natural world which is embedded in a broader, all-encompassing reality.

More towards the centre of the spectrum are those who claim to reject all forms of supernaturalism but who also reject the hardline scientific view as narrow and impoverished. Advocates of process theology (or process philosophy) come to mind in this connection, but, though they claim to reject supernaturalism and embrace naturalism, theirs is a form of naturalism which goes well beyond the usual understanding of the term.

Ordinary agnostics, who are prepared neither definitively to embrace nor to reject spiritual possibilities, would also find themselves somewhere in the centre.

The central part of the spectrum is admittedly a very ill-defined and perhaps unstable area. It is characterized more by what the individuals involved don't accept than by what they do, and I tend to want to interpret their positions as at least tending one way or the other. Process thinkers, for example, for all their explicit rejection of supernaturalism, clearly tend to the religious end of the spectrum. Others, who might maintain links with religious rituals for merely social or cultural reasons, for example, tend in the opposite direction, as their actual beliefs may not differ much at all from those of people who explicitly embrace a hardline, science-oriented view.


On a related matter, it can be argued (on historical, sociological and logical grounds) that philosophy and religion are intimately linked and, though I won't elaborate on that idea here, I think it's worth remarking that a large (and, in America at least, increasing) number of philosophers are not only anti-scientistic but also religious.

Ludwig Wittgenstein was a prominent and interesting example, not least because of the huge influence he has exerted and continues to exert. He kept his religious orientation pretty much to himself. But it was there – and it clearly motivated his philosophical thinking.

As well as his private notebooks, we have detailed accounts by a number of Wittgenstein's friends to support the view that he had strong religious tendencies and commitments. Maurice O'Connor Drury's recollections are particularly important, and Norman Malcolm (another close friend) explained Wittgenstein's vehement rejection of scientism in terms of his religious orientation.

Henry Le Roy Finch has made the point that Wittgenstein was throughout his life a supernaturalist in the mould of Pascal and Dostoievsky. As well as explaining the tenor of his thinking in many areas, this religious orientation also led – more than any other single factor – to his falling out with Bertrand Russell. The gulf between their basic outlooks was just too great.

This view accords well also with that of Ray Monk who has written intellectual biographies of both men, and who, in a lecture I heard him give some years ago, emphasized not only the absolute contrast and utter incompatibility between Russell's secular outlook and Wittgenstein's essentially religious view of the world, but also the way their respective views permeated their philosophical thinking. (Monk identifies very strongly with Wittgenstein's general outlook – and does not hide his distaste for Russell's.)

Thursday, March 20, 2014

Science as a way of seeing

Attitudes to science and attitudes to language are often related. Many science-oriented people are 'linguistic revisionists'. They have a low opinion of ordinary language (because of its vagueness and ambiguity) and seek to reform it or replace it wherever possible with various formalisms. Conversely, a negative attitude to science and mathematics and logic is often evident amongst lovers and respecters of natural language (especially in literary circles for example).

But there is no reason why one cannot combine a passionate commitment to a scientific (even scientistic) view of the world with a profound respect for natural languages – these curious products of biological and cultural evolution – as objects or systems and with a recognition of what these systems are uniquely equipped to do.

To complicate matters, it's also possible to combine a commitment to the formal sciences with a passionate hatred for the physical sciences. This is a not uncommon position, actually, but one I will not deal with here.

What follows, then, are some preliminary and loosely connected notes on the differences between broadly scientific and other modes of thinking, seen in relation to language.


Reasoning and deduction can, of course, be framed in formal terms, and even natural languages can, to an extent, be seen as interpreted formal systems.
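
As a concrete (if very much toy) illustration of what it means to treat a fragment of language as a formal system – leaving interpretation to one side – here is a minimal sketch, with a handful of made-up rewrite rules and a simple recognizer; the grammar and vocabulary are my own arbitrary choices, not anyone's analysis of English:

    # Toy sketch: a tiny fragment of English treated as a formal system.
    # Rewrite rules map categories to sequences of categories or words.

    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"]],
        "VP":  [["V", "NP"], ["V"]],
        "Det": [["the"], ["a"]],
        "N":   [["dog"], ["cat"]],
        "V":   [["sees"], ["sleeps"]],
    }

    def spans(symbol, words, start):
        """Positions reachable after deriving `symbol` from words[start:]."""
        if symbol not in GRAMMAR:  # a terminal, i.e. an actual word
            return {start + 1} if start < len(words) and words[start] == symbol else set()
        ends = set()
        for rule in GRAMMAR[symbol]:
            positions = {start}
            for part in rule:
                positions = {e for p in positions for e in spans(part, words, p)}
            ends |= positions
        return ends

    def grammatical(sentence):
        words = sentence.split()
        return len(words) in spans("S", words, 0)

    print(grammatical("the dog sees a cat"))   # True
    print(grammatical("dog the sees"))         # False

The interest of such systems lies, of course, less in toy examples than in how far the approach can be pushed – and in what gets left out along the way.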

Such formal logical approaches – which don't come naturally to most of us – represent a limited but (paradoxically) revealing perspective, rather like an X-ray image, or a monochrome drawing (a landscape, say).

They have their own beauty, these approaches, but it is a spare beauty which derives from abstraction, from leaving things out – like soft tissue in the case of the X-ray, or colour and smell and sound and movement and a third spatial dimension in the case of the drawing.

Revealing and beautiful – and also useful. It was this mode of thinking that gave rise to mathematics, science and technology. And, in the mid-20th century, habits of abstract and reflexive thought finally brought formal systems themselves to life in the form of the digital computer.

But computers, as embodiments of formal thinking, suffer the limitations of formal thinking, and are not well-equipped to deal with the rich parallelism of human perceptions or the tacit knowledge implicit in ordinary human actions and interactions and language use. Their strengths are our weaknesses and their weaknesses our strengths.

What is most notable about normal human brains – in stark contrast to machine intelligence – is their remarkable ability to deal with non-abstract things, and, in particular, with the hugely complex sensory and social realms; in conjunction, of course, with natural language, the bedrock of social life and culture.

Human languages are in fact quite remarkable in their capacity for expressing the subtleties of psychological and social experience. I don't much like the word 'literature'; it's a bit stuffy and pretentious, but it's the only word we've got in English that picks out and honours, as it were, texts which explore and exploit this capacity.

The word 'letters' worked in compound expressions in the relatively recent past ('life and letters', 'man of letters') but is now quite archaic. 'Belles lettres' even more so.

The adjective 'literary' is, however, neither pretentious nor archaic, simply descriptive. It can be a neutral indicator of a specific context of language use. Or it can be used to designate (often pejoratively, it must be said) a particular style or register of language use (in contrast to technical or plain or straightforward or colloquial language, for example).

In the early 20th century, the linguist (and one-time student of Ferdinand de Saussure) Charles Bally saw the need to expand the scientific study of language to encompass the subjective and aesthetic elements involved in personal expression. His notion of stylistics was further developed by thinkers associated with the Prague school – most notably Roman Jakobson, who listed the 'poetic function' as one of the six general functions of language.

[I am always wary when scholars make numbered lists of this kind (suspecting that reality is rather less amenable to clearcut categorization than the scholars would wish).

Though his overview of linguistic functions is harmless enough, Jakobson did in fact have a tendency to drive his more technical insights too hard and too far. On markedness and binarism, for instance. But that's another story.]

On the question of the possibility of a satisfactorily scientific study of style I am undecided.

Certainly, the importance of stylistic elements in actual human communication is often underestimated and communication failures are often the result of stylistic rather than substantive issues. The aesthetic element is also important in its own right (as Jakobson saw).

But scientific approaches are characterized by their narrow focus and abstractness: by what they leave out. And what they leave out is generally the subjective or experiential side of things. Twentieth century phenomenologists and others tried – and failed – to reinsert into the scientific view what had been omitted.

A supposedly 'scientific' approach (phenomenological or otherwise) could never really replace, as I see it, the informal 'close reading' of a text or spoken exchange (for example) by a perceptive reader or listener who was well versed in the language and culture (or sub-culture) in question.

Was a particular characterization plausible or a given piece of dialogue convincing? Was a particular remark witty or just sarcastic or rude? Was someone being condescending in what she said, or kind (or both condescending and kind)?

Often the answers to such questions will depend not only on non-verbal and para-linguistic factors but also on the subtle connotation of a word or turn of phrase.

Logical languages (like the predicate calculus) strip these psychological and emotional and aesthetic elements away; and all scientific language – even in the social sciences – aspires to denotation, pure and simple.
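
To make the point concrete with a rough example of my own (deliberately crude, and not drawn from any textbook), consider how an ordinary sentence fares when pushed into the predicate calculus:

    % 'She finally conceded the point, rather graciously.'
    % rendered, stripped of tone and nuance, as something like:
    \exists x \, \exists y \, \bigl( \mathrm{Person}(x) \land \mathrm{Point}(y) \land \mathrm{Conceded}(x, y) \bigr)
    % The temporal colouring of 'finally' and the evaluative force of
    % 'rather graciously' find no place in the formula.

Whatever the formula gains in precision it pays for in what it leaves unsaid.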

As I started out by saying, that spare, direct approach has its own beauty which stems above all from its power to make us see in a more direct and culturally unencumbered way.

You can interpret the scientific way of seeing things (which goes beyond science as such) in an almost mystical way, in fact: as a means of 'cleansing the doors of perception', of temporarily sloughing off the necessary – and necessarily arbitrary – cultural baggage of social existence.

Monday, February 24, 2014

Death and the sense of self

This is a postscript to some previous discussions on death, human identity and 'the phantom self'.

These issues are quite maddening because one feels they should be simple. But (certainly as philosophers like Derek Parfit present them) they don't seem so.

I have given my (provisional) views on all this, and one of my conclusions is that Parfit's suggestion that day-to-day survival is not what it seems, being virtually equivalent to dying and having an exact copy live on, is just wrong.

Sure, the notions of the self and identity are problematic, but our struggle for (bodily) survival is at the heart of things, surely. We know what it is to go into an unconscious state and subsequently wake up. And we can imagine – not waking up! (Foresee our own actual death, in other words.)

However, having had various private discussions on this matter, I recognize that some people see it differently from me and would be happy enough to have their bodies destroyed so long as an exact copy survived.

"But look at it from your point of view," I would say. "You go into the (transporter) machine, get scanned, lose consciousness, and that would be that. You wouldn't 'wake up' as the copy (or one of the copies if there were several). You wouldn't wake up at all. Ever. Whereas, of course, for other people 'you' would still be there. Your wife would not have lost her husband, etc.. But you would have lost your wife – and everything else."

"But this you you talk about, what is it? You speak as if it's a soul or an essence..."

Which I of course deny. But I see that my interlocutor just doesn't get what I am saying, and I begin to wonder if I am making sense.

People see these matters very differently, and I suspect that one of my interlocutors may have given an explanation of sorts when he said, "Some people just have a stronger sense of self than others."

Those with a stronger sense of self, then, would be less likely to identify that self with any copy, however exact.

You could also plausibly see a strong sense of self as being associated with a strong survival instinct (and/or egoism), and a weaker sense of self with a less-strong survival instinct. But the crucial question is: how does this translate into truth claims?

It could be that a weaker sense of self tends to obscure – or blur – the simple (and tragic) truth about death. Then again, perhaps a strong sense of self and survival instinct leads one to underestimate the equivocal and tenuous nature of the self.

The human self is a complex – and indeed tenuous – phenomenon, based as it is on cultural and social as well as biological factors. But tying its fate to the fate of the body does not entail identifying it exclusively with the body in any simple way. For the self depends on the body, even if it also depends on other things. And when the body fails, it fails.


A couple of final comments of a more general nature.

A straightforward scientific approach doesn't seem to work on these problems of death and identity, just as it fails to work on other typical philosophical problems – like free will. Could this have something to do with self-reference?

The major paradoxes of logic are self-referential, and the problems being discussed here (and the free will problem also) have a self-referential element.

And though self-reference in logic doesn't relate to a human self but just to concepts turning back on themselves (like a set being a member of itself), there does seem to be a parallel that may help to explain the intractability of these sorts of questions.
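
The best-known case can be stated in a line or two. Russell's paradox concerns the set of all sets that are not members of themselves:

    R = \{\, x : x \notin x \,\}
    \quad \Longrightarrow \quad
    R \in R \iff R \notin R

Something defined in terms of its own application to itself generates a question it cannot settle – which is at least suggestive of the way first-person questions about the self seem to turn in on themselves.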

The problems (or limitations) may, in other words, be logical as well as psychological (and so deeper).

Science aspires to an objective, third-person point of view or 'view from nowhere'. It is not undermined (though perhaps dogged at a fundamental level) by those self-referential logical paradoxes. And it can readily explain (albeit from a general, objective point of view) how first-person perspectives arise in nature – and much about them.

The first-person point of view is fine, in fact – until it starts to reflect on its own nature and make (science-like) claims about itself.

Tuesday, January 28, 2014

Nouny nouns

Most of us come up with ideas which we think are good but which we don't develop or exploit. Ideas for making money or doing good, or – as in the case I am about to describe – ideas which have absolutely no possible commercial or practical applications.

Typically, we discuss these bright ideas with trusted friends or family members and get discouraged when our interlocutors are less than overwhelmed.

So let me recycle here (to the extent that I can reconstruct it from memory) one such idea which was effectively discouraged by an old academic friend and colleague whose views on the matter I may have taken a shade too seriously. Or not, as the case may be.

It relates to the topic of animism, which I raised in my previous post on this site.

There I talked about the so-called 'mind projection fallacy' discussed by Edwin Thompson Jaynes. Jaynes cited evidence from ancient literature and pointed out that the fallacy in question would have long pre-dated written records.

We have anthropological evidence for something like Jaynes's mind projection fallacy from studies of various non-literate cultures, but my idea was to look for evidence in the structure of language.

For our natural tendency to project human-like intelligence into non-living and non-human nature is obviously reflected in various ways in the grammar and morphology of the languages we speak or know about – languages which would not only have reflected but also facilitated animistic modes of thinking.

You find traces of animism even in modern English idioms such as 'the wind blows', but grammatical analysis of both verbal and nominal forms takes us much further back in time.

My intention was to focus on nouns. Willard Van Orman Quine speculated (in his Word and Object, as I recall) that the most basic form of noun was the mass noun – like 'sand' – rather than the count noun – like 'hill'. The former doesn't need an article ('the' or 'a'); the latter does.

But, counter to Quine's speculations, it can in fact be demonstrated by looking at the potential for inflection – grammatical suffixes and so on – of various kinds of noun in a range of languages within the Indo-European family that the prototypical noun – the 'nounier' noun if you like – is the count noun rather than the mass noun; and, of the count nouns, animate nouns are nounier than inanimate nouns; and nouns relating to humans or human-like agents are the nouniest of all.

My intention, then, was to elaborate and refine and draw out the implications of this fact: that for many languages – including some of the oldest linguistic forms of which we have any knowledge – the nouniest nouns are personal agents.

Perhaps this idea had already been developed by others at the time I first thought of it. Perhaps it has been discussed and developed more recently. Perhaps it is just not an interesting enough idea to bother with. Or perhaps none of the above applies.

Wishing, then, to maintain – at least for a little while – a state of blissful ignorance on the matter, I am deliberately postponing any scholarly delving.

I have also refrained from mentioning the name of the linguist (now in his eighties) whose work was my jumping-off point. If his name comes up in my (or anyone else's) searching it will suggest that the territory is still relatively virgin.