Last month I listed a number of topics (mainly linguistic, psychological and historical) I have been thinking about. The last item on the list was the most philosophical, relating to the challenge that mathematics can be seen to pose for empiricism.
Mathematics is often presented as a deep and interesting area of knowledge which is somehow independent of the empirical world. Well, it certainly is a deep and interesting area of knowledge, but it is also very much a product of physical brains interacting with the wider physical (and cultural) world. Sure, it operates at a very high level of abstraction; and, sure, many mathematicians are mathematical realists (or Platonists) who feel themselves to be exploring (and discovering things within the context of) an independently-existing and non-empirical realm.
But the old idea of a realm of pure mathematics without applications (promoted by mathematical Platonists such as G.H. Hardy) is looking increasingly forced and dated (i.e. tied to a particular cultural tradition). It's well known that ideas from pure mathematics often find subsequent – and unexpected – applications. Non-Euclidean geometries, for example, were originally developed in the 19th century as pure mathematics but subsequently found applications in cosmological theories.
It was technological applications in particular, however, that Hardy shied away from. He would not have been happy that his own area, number theory, which he loved precisely for its purity and uselessness, turned out to have important applications in cryptography and computer science.
Cantor, of course, was also a Platonist. I was looking again at his 'diagonal argument' which shows that the set of real numbers is not countable (denumerable) – so that some infinite sets (as George Orwell might have put it) are more infinite than others.
But I always feel uneasy when infinities (even common-or-garden infinities like the sequence of natural numbers or the decimal expansion of pi) are built into arguments. Cantor's argument is clever and convincing in a sense, but the (infinite) matrix on which it is based is merely imagined (or postulated, or projected).
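For the record, here is the skeleton of the argument in its standard decimal form (the particular choice of digits below is just one common convention). Suppose the real numbers in the interval $(0,1)$ could be listed as $r_1, r_2, r_3, \dots$, with decimal expansions $r_i = 0.d_{i1}d_{i2}d_{i3}\dots$. Define a new number $x = 0.e_1e_2e_3\dots$ by setting

$$e_i = \begin{cases} 5 & \text{if } d_{ii} \neq 5,\\ 4 & \text{if } d_{ii} = 5.\end{cases}$$

Then $x$ differs from $r_i$ in the $i$-th decimal place for every $i$, so it cannot appear anywhere on the list, contradicting the assumption that the list was complete. The 'matrix' I have in mind is precisely that infinite array of digits $d_{ij}$, which the argument asks us to treat as a completed whole.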
I think I'm okay with mathematical procedures which involve an unending series of steps (potential infinity); but not with mathematical objects which contain an infinite number of elements (actual infinity).
So it seems that I am an intuitionist, but I can't really say at this stage whether I am also a finitist. (Finitists recognize only those mathematical objects which can be constructed from the natural numbers in a finite number of steps.)
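A loose computational analogy (nothing more than that) may help to fix the distinction. A generator in a programming language embodies potential infinity: it will hand over the next natural number whenever asked, for as long as one cares to ask, without the completed set of all natural numbers ever existing as a finished object.

```python
def naturals():
    """Potential infinity: an unending procedure, not a completed object."""
    n = 0
    while True:   # there is always a next step...
        yield n   # ...but the 'set of all naturals' is never materialized
        n += 1

gen = naturals()
first_ten = [next(gen) for _ in range(10)]  # any finite initial segment is available
print(first_ten)  # [0, 1, 2, ..., 9]
```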
Another topic I've been thinking about is panpsychism. What prompted my (renewed) interest was coming across a philosophically-oriented blogger with a PhD in theoretical physics whose nom de plume happens to be 'Panpsychist'. He had made some intriguing comments on a post by Massimo Pigliucci on reductionism in science and invited people to continue the discussion on his site. (Massimo has set limits on comments at Scientia Salon.)
I followed the reductionism debate quite closely but didn't participate. It was characterized by a certain degree of terminological confusion, specifically about the meaning and application of certain terms used in the philosophical literature, e.g. "token physicalism" (which is associated with "supervenience") and "type physicalism" (which is associated with "strong emergence").
Panpsychism is relevant to the topic of reductionism in that it can be seen as a way of getting around the problem of reducing mental properties to physical properties.
'Panpsychist' rejects dualism and also rejects the idea of emergence, the idea, as he puts it, "that mental properties emerge from certain configurations of regular, non-mental matter." He says that reading David Chalmers convinced him that the standard idea of emergence was wrong because it failed to address the 'hard problem' of consciousness. He also argues against the view that panpsychism is an essentially religious position.
Some time ago I considered and rejected David Chalmers' take on the so-called hard problem of consciousness as unconvincing and unscientific. Basic to his approach is imagining beings – his philosophical 'zombies' – that look and behave just like humans but lack conscious awareness; logically coherent perhaps, but utterly implausible both in terms of common sense and in terms of science.* Biological creatures have various levels or degrees of consciousness or awareness or sentience: that's just how things work. And imagining a world in which quite arbitrary differences – differences not driven by scientifically-based reasoning – apply is merely idle speculation. Also, Chalmers-inspired approaches tend (as I see it) to be too much focused on human consciousness rather than on the more primitive – and basic – forms of awareness from which the former ultimately derives. The sentience of simple life-forms is where the real (philosophical) interest lies, in my opinion.
I have also in the past seriously considered and rejected panpsychism, but I do acknowledge that the curious spectacle of an apparently inanimate universe producing sentient and ultimately conscious organisms – and so in a sense becoming conscious of itself – does give one reason to ponder the possibility that consciousness (in some form or other) is fundamental (in some sense or other).
Looking again at these issues, I note that the revival of interest in panpsychism in philosophical circles (prompted in part by Chalmers' work in the 1990s) is being driven largely by 'process' thinkers. (It used to be called 'process theology', but the 'theology' is generally dropped these days and replaced with 'thought' or 'thinkers' or nothing at all.) It all goes back to Whitehead – and ultimately to neo-Platonism, I suppose.
I tried to read Whitehead a couple of times, but found him rather vague and wordy and (unnecessarily?) obscure. His obscurity can't simply be put down to his having grown up when 19th-century philosophical idealism was at its zenith and having internalized old idealist assumptions and ways of speaking: F.H. Bradley was not only an idealist but had far less mathematical and scientific knowledge than Whitehead, and yet I have read Bradley and found his work interesting. I think perhaps Bradley had keener insights into human psychology than Whitehead and so was more grounded. He also had a better prose style, which is often a sign of groundedness.**
Despite not warming to Whitehead's work (or that of his followers), I do like the idea of seeing fundamental reality as process rather than 'stuff'. (The fact that matter and energy are interconvertible makes old-fashioned materialism unviable.)
This view fits in nicely with the idea of computation, and with seeing the cosmos as some kind of computational (or computation-like) process.
And finally, returning to mathematics, I see the natural numbers also in terms of process: namely iteration.***
* As I see it, logic derives from and is intimately related to mundane real-world and scientific reasoning. Logic may be an independent discipline but this does not entail that the subject of the discipline constitutes or forms the basis of some kind of alternative reality.
** Heidegger comes to mind here also: despite the excesses and idiosyncrasies, his language meshes with reality somehow (at least some of the time!).
*** As they are expressed, for example, in Church's lambda calculus. On the whole, though, Church is a bit too abstract (and Platonic) for me.
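By way of illustration – this is the standard Church-numeral encoding, transcribed loosely into Python rather than into the lambda calculus proper – each natural number is represented as an iteration count: the numeral n is the higher-order function that applies a given function n times.

```python
# Church numerals: the natural number n is 'apply f, n times'
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

def church(k):
    """Build the Church numeral for k by iterating succ."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def to_int(n):
    """Read a Church numeral back off by counting applications."""
    return n(lambda i: i + 1)(0)

print(to_int(succ(church(4))))   # 5
```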
Sunday, November 2, 2014
Thoughts and questions
Here are some thoughts and questions which have come to mind recently. I will definitely be following up on some of them.
• Human communication is not what it seems. This is a fact which typically takes some time (and multiple relationship failures) for us to learn. Even relatively straightforward and sincere messages are routinely construed by recipients quite differently from how senders conceive them: what is being sent and received at the level of symbols is the same thing (i.e. the same set of symbols), but what is being sent and received at the level of interpreted symbols are different things.
• The contingent (and unrepeatable) features of any individual's upbringing – which includes as a central element a unique and ever-changing cultural matrix – raise awkward questions about values. We like to think of our core values as being, if not objective or universal, then at least as having some permanent or abiding relevance. But do they?
• Terms like 'moral' and 'ethical' refer to important aspects of human behaviour but I am inclined to think that ethics can only be usefully intellectualized when approached in a more or less descriptive way. Nietzsche in his more scientific moods is my model on this front. I don't know that normative ethics can ever be a coherent intellectual discipline – in part because making moral judgments and decisions is not just an intellectual matter. A very 'thin' kind of ethics based on notions such as reciprocity might be seen as a precondition for any kind of social life and so as uncontroversial, however.
• Is there – as Karl Kraus thought – a deep and intimate link between morality and how we use language? (I don't think so. Not in the way Kraus saw the matter, anyway.)
• What is the cause (and significance?) of the disappearance of the subjunctive and, more generally, of formal and literary modes of speaking and writing?
• What is the source and significance of that strange sense of dread and guilt which some people feel deeply and others don't feel at all? This is one of those many topics which you could approach via psychology or historically. The Etruscans were said to be particularly prone to such feelings, and certain strands of Christian thinking were driven by this sort of thing. There was, I recall, an obscure member of the Vienna Circle who wrote something on this topic (taking a psychological approach). Must look him up.
• The Idealist and Romantic notion of the spirit or genius of a language or a people generally makes far too much of linguistic and cultural groupings, imputing to them not only a life of their own but also a destiny to fulfil – a totally implausible (and very dangerous) idea which is still being energetically propagated today. Culture is crucially important, but clearcut boundaries between languages and cultures simply don't exist.
• In fact, the very notion of a (natural) language is problematic. Certainly it represents an abstraction from empirical reality. (Chomsky believes that there are only (overlapping) idiolects.)
• The early-20th century fad for constructed international languages: what was driving it? How did this movement – or competing set of movements – relate to other international movements of the time, like socialism for instance?
• The nature of mathematics. Can mathematics be fitted into a (broadly) empiricist epistemology and/or a naturalistic worldview?
Monday, September 29, 2014
Dial M for Metaphysics
I had to laugh when I saw this advertisement for a kind of New Age one-stop shop. You can certainly imagine people paying these folks for hypnotic help to give up smoking or lose weight, or to access other services like counselling or massage or even astrological guidance. But I just couldn't figure out why anyone would want to consult a metaphysician.
Of course, what these people mean by 'metaphysics' is very different from what academic philosophers mean by it. The term has long been appropriated by New Age types. What you might find in the metaphysics section of a bookshop, for example, would bear no relation to what academic philosophers mean by the term.
In fact, the very word 'philosophy' is becoming problematic. People have all sorts of notions of what philosophy might be, but few of them mesh with how academic philosophers see their discipline. And even academic views of philosophy tend to diverge alarmingly between different schools of thought and traditions and even between individuals.
I recently tweeted [@englmark] the above photo with a comment to the effect that the appropriation of such words by others may indicate that academic philosophy is losing the battle for the very terms which have traditionally defined it. This prompted the philosopher Massimo Pigliucci to come in and suggest that science and the sciences face similar problems. He cited so-called 'creation science' and the use (or misuse) by mystics of quantum physics.
But neither example is a complete appropriation and both merely represent attempts to use the prestige and respectability of science to promote religiously-driven points of view.
Unlike Massimo, I think that confusion over terms like 'metaphysics' and 'philosophy' may be indicative of more general problems facing the discipline, and that academic philosophy has suffered a significant loss of status in recent decades – especially in scientific and secular circles. (An essay of mine published earlier this month at Scientia Salon dealt with some of these issues.)
Saturday, August 16, 2014
Designing a conscience for warrior robots
You wouldn't normally expect to come across a reference to deontic logic in a Bloomberg opinion piece, but a recent article on the perceived dangers and possible downsides of artificial intelligence cites a paper [PDF] which, drawing on formal logical and ethical theory, proposes a method for creating an 'artificial conscience' for a military-oriented robotic agent.
The paper, by Ronald C. Arkin, "provides representational and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement." *
What interested me particularly was seeing basic logical and ethical theory being seriously discussed and applied in such a context.
Arkin sees virtue-based approaches as not being suitable for his purposes because they are heavily reliant on interpretation and on cultural factors and so are not amenable to formalization. Utilitarian approaches may be amenable to formalization but, because they are not geared to utilize the concept of human rights, do not easily accommodate the sorts of values and outcomes upon which the research is particularly focussed (e.g. protecting civilians or not using particular types of weapon).
So Arkin opts for a basically deontological approach, but a scaled-down version which does not purport to derive its rules or guidelines from first principles or from a universal principle like Kant's Categorical Imperative.
Arkin's recommended design would incorporate and implement sets of specific rules based on the just war tradition and various generally accepted legal and moral conventions and codes of behavior pertaining to warfare.
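To make the general idea concrete, here is a crude sketch of a rule-based veto layer of the kind such a design presupposes. To be clear, this is not a rendering of Arkin's actual architecture: the rule names, data fields and thresholds below are invented purely for illustration.

```python
# A crude sketch (not Arkin's design) of an 'ethical governor': a constraint
# layer that vetoes proposed actions violating explicitly encoded prohibitions.
from dataclasses import dataclass

@dataclass
class ProposedAction:           # hypothetical fields, for illustration only
    target_is_combatant: bool
    weapon_permitted: bool
    expected_collateral: float  # invented index of risk to non-combatants, 0..1

# Forbidding constraints loosely modelled on Laws of War / Rules of Engagement
CONSTRAINTS = [
    ("discrimination", lambda a: a.target_is_combatant),
    ("permitted weapon", lambda a: a.weapon_permitted),
    ("proportionality", lambda a: a.expected_collateral <= 0.1),
]

def governor(action: ProposedAction):
    """Return (allowed, violated): release the action only if no constraint fails."""
    violated = [name for name, ok in CONSTRAINTS if not ok(action)]
    return (len(violated) == 0, violated)

print(governor(ProposedAction(True, True, 0.05)))   # (True, [])
print(governor(ProposedAction(False, True, 0.05)))  # (False, ['discrimination'])
```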
He points out that machines programmed on such a basis would be likely to be more reliably moral than human agents partly because they would be unemotional, lacking, for example, the strong sense of self-preservation which can sometimes trigger the use of disproportionate force in human agents.
The main problem, as I see it, is that, in general terms, the more morally constrained the robot is, the less effective it will be purely as a fighting machine, and so there will be an ever-present temptation on the part of those deploying such machines to scale back – or entirely eliminate – the artificial conscience.
Although the need to maintain the support of a public very sensitive to moral issues relating to such matters as torture and the safety of non-combatants would lessen such temptations for the U.S. military and its traditional allies, it would be foolish to imagine that other players and forces less committed to applying ethical principles to the conduct of war would not also get access to these technologies.
* Arkin is based at Georgia Tech, and the research is funded by the U.S. Army Research Office.
Tuesday, July 22, 2014
Faint possibilities
I have identified a possible flaw in a thought-experiment-based argument I posted here ('Kirk dies') some time ago. I wrote:
"We willingly go to sleep at night. We willingly get anesthetized for an operation. We might also be happy to go into 'cold storage' for a long space journey or to survive a devastating catastrophe on earth (a 'nuclear winter', for example).
But, what if, though we could be certain the hibernation device would not fail to keep our body alive and in a resuscitatable state, we just did not know whether or not it would ever get around to waking us up?
Going into such a device becomes exactly equivalent to a game of Russian roulette. Death (as in the death of the body) is functionally equivalent to not waking up, ever. All the death of the body does is make it impossible ever to wake up. It takes away hope.
But, from the point of view of the unconscious person, hope – or any kind of expectation – is irrelevant. So the experience of death is equivalent to the experience of going into a state of unconsciousness – nothing more."
The problem (as I now see it) is that I was overlooking the fact that a body which is revivable must, like any living thing, be in some sense sentient.*
A person requires consciousness as well as a social context, etc., and so is much more than a sentient body or body components. But the latter is the basis and sine qua non of the former and – crucially – is what makes me me and not you.**
The higher functions of my brain (or my 'mind') fade in and out, and may go permanently before my body fails. Once they are gone, I (as a functioning person) am gone. But in another sense I live on in my separateness until the death of the body.
In a fundamental sense, basic sentience is a far more interesting phenomenon than consciousness or self-consciousness, as it is the root of the latter. If there is a deep mystery in the universe, sentience is it. The primitive organism in a rock pool attracted to the warmth of the sun: this is what is most remarkable.
A human body, then, in some imagined technology-induced kind of super-hibernation would be unconscious but sentient (probably in many ways). And such a 'sleep' – even if extended indefinitely – is not at all equivalent to death.
I thought I had demonstrated (assuming that a functioning human body is entirely physical, i.e. soulless) that there can be no afterlife, no waking up (as it were) after death.*** But my little thought experiment was, I think, fatally flawed.
All sorts of possibilities – especially in an infinite universe (or multiverse) with multiple copies and so on – remain in place (faint though these possibilities may be).
* I am assuming here that any process which completely shuts down the functioning of organs, cells etc. would not be reversible: it would in effect kill the body.
** I know there are difficult questions here – which I'm skirting – about what exactly it is that confers identity on an individual, or what identity consists of or in; but (if one rejects a Cartesian view) the living body is clearly basic.
*** Looking at things from a first-person perspective helps to keep these sorts of discussions grounded. Look, for example, at the sort of question that a dying person might ask him/herself. Something like: "Is this the end, the end of all experience (for me)?" I put the 'for me' in brackets because the burden of the question lies elsewhere. The dying person is not interested in me-ness but in whether or not there are going to be any more experiences. One can imagine – looking forward – waking up as oneself at another (say, earlier) stage of life, as someone else entirely or even as a giant cockroach – and I think these imaginings are at least coherent. (Kafka's giant bug had psychological continuity with the man who suffered the transformation. My cockroach doesn't (necessarily), and nor does my someone else. Thus the inserted clause, 'looking forward'.)
"We willingly go to sleep at night. We willingly get anesthetized for an operation. We might also be happy to go into 'cold storage' for a long space journey or to survive a devastating catastrophe on earth (a 'nuclear winter', for example).
But, what if, though we could be certain the hibernation device would not fail to keep our body alive and in a resuscitatable state, we just did not know whether or not it would ever get around to waking us up?
Going into such a device becomes exactly equivalent to a game of Russian roulette. Death (as in the death of the body) is functionally equivalent to not waking up, ever. All the death of the body does is make it impossible ever to wake up. It takes away hope.
But, from the point of view of the unconscious person, hope – or any kind of expectation – is irrelevant. So the experience of death is equivalent to the experience of going into a state of unconsciousness – nothing more."
The problem (as I now see it) is that I was overlooking the fact that a body which is revivable must, like any living thing, be in some sense sentient.*
A person requires consciousness as well as a social context, etc., and so is much more than a sentient body or body components. But the latter is the basis and sine qua non of the former and – crucially – is what makes me me and not you.**
The higher functions of my brain (or my 'mind') fade in and out, and may go permanently before my body fails. Once they are gone, I (as a functioning person) am gone. But in another sense I live on in my separateness until the death of the body.
In a fundamental sense, basic sentience is a far more interesting phenomenon than consciousness or self-consciousness, as it is the root of the latter. If there is a deep mystery in the universe, sentience is it. The primitive organism in a rock pool attracted to the warmth of the sun: this is what is most remarkable.
A human body, then, in some imagined technology-induced kind of super-hibernation would be unconscious but sentient (probably in many ways). And such a 'sleep' – even if extended indefinitely – is not at all equivalent to death.
I thought I had demonstrated (assuming that a functioning human body is entirely physical, i.e. soulless) that there can be no afterlife, no waking up (as it were) after death.*** But my little thought experiment was, I think, fatally flawed.
All sorts of possibilities – especially in an infinite universe (or multiverse) with multiple copies and so on – remain in place (faint though these possibilities may be).
* I am assuming here that any process which completely shuts down the functioning of organs, cells etc. would not be reversible: it would in effect kill the body.
** I know there are difficult questions about what it is exactly that confers identity on an individual or what identity consists of or in which I'm skirting here, but (if one rejects a Cartesian view) the living body is clearly basic.
*** Looking at things from a first-person perspective helps to keep these sorts of discussions grounded. Look, for example, at the sort of question that a dying person might ask him/herself. Something like: "Is this the end, the end of all experience (for me)?" I put the 'for me' in brackets because the burden of the question lies elsewhere. The dying person is not interested in me-ness but in whether or not there are going to be any more experiences. One can imagine – looking forward – waking up as oneself at another (say, earlier) stage of life, as someone else entirely or even as a giant cockroach – and I think these imaginings are at least coherent. (Kafka's giant bug had psychological continuity with the man who suffered the transformation. My cockroach doesn't (necessarily), and nor does my someone else. Thus the inserted clause, 'looking forward'.)
Tuesday, June 17, 2014
On modern human origins and the emergence of complex language
Though much about the movements, migrations, interactions and material culture of early modern humans remains uncertain, rapid progress is being made by researchers. Questions concerning the non-material culture of our ancient ancestors, however – and, in particular, concerning their languages or modes of language-like communication – are far more problematic. What follows are a few reflections on what, in general terms, we know, and what the prospects might be for learning more.
Our ultimate African origins are not in dispute, but there are still fundamental disagreements between supporters of models which see modern humans as having migrated (more or less recently) to other continents, replacing other hominins in the process, and supporters of versions of a multiregional hypothesis who see the evolution of modern humans from earlier forms not just as an African but as a worldwide phenomenon involving significant interbreeding between different kinds of hominin, complex gene-flows and a number of regional continuities dating back at least 200,000 years.
Despite these disagreements it is, I think, becoming increasingly clear that the recent African origin model, the view that modern humans arose as a new species in Africa and migrated to other continents around 60,000 years ago, replacing existing human species in the process, is at best an oversimplification. For there is now firm genetic evidence that interbreeding occurred between modern humans and Neanderthals in Europe and between modern humans and Denisovans in Asia, as well as evidence that migrations of modern humans occurred more than 100,000 years ago. New versions of the 'out of Africa' model – which push back the dates of migrations and take into account interbreeding between different human groups – bring it closer to a multiregional model, though any consensus is still a long way off.
A recent University of Tübingen research project exemplifies how the African origin model is changing. The study focuses on modern humans who migrated east via the Arabian peninsula area where stone tools dating from more than 120,000 years ago have been found. Two significant migrations – a very early one (ca. 130,000 years ago) along the southern coast of the Arabian peninsula and a much later one via a northern route – were hypothesized, and the researchers' models predict in general terms the actual data (skull measurements and genetic data) of population groups currently living in the Asia-Pacific region.
According to Hugo Reyes-Centeno, a leading member of the Tübingen team, Aboriginal Australians, Papuans and Melanesians were "relatively isolated after dispersal along the southern route" and other Asian populations were largely descended from groups migrating much later (about 50,000 years ago) along other routes, the main one going via the north of the Arabian peninsula and northern Eurasia.
These results need to be treated with caution, however, as the data on which the models are based are necessarily extremely limited and incomplete. The results also need to be integrated with other data, including, for example, findings which indicate that Denisovans, who were widespread in Asia during the Late Pleistocene, contributed some 4-6% of the genetic material of present-day Melanesians.
The Denisovans were named after a cave in southern Siberia where a finger bone fragment from which DNA was able to be extracted was discovered. Geneticists have now managed to sequence the entire Denisovan genome to a high degree of accuracy.
Though closely related to Neanderthals, Denisovans seem to have interbred with an unidentified species and picked up some of their DNA. "Denisovans," claims David Reich of the Harvard Medical School in Boston, "harbour ancestry from an unknown archaic population, unrelated to Neanderthals." One possibility is that these scattered DNA fragments (which constitute only about 1% of the Denisovan genome) derive from H. heidelbergensis who lived in Europe and western Asia between about 600,000 and 250,000 years ago. Another possibility for the source of the archaic genes is Homo erectus.
While new archaeological and genetic evidence about the early history of humanity continues to accumulate and the broad outlines of a plausible story are beginning to fall into place, making progress in understanding linguistic (and many other cultural) factors will be difficult. Sure, archaeological findings may throw some light on questions concerning where and when complex languages first developed amongst human populations and also on the vexed question of whether Neanderthals used complex languages. For example, there is strong archaeological evidence that behavioral and cultural changes occurred amongst modern humans about 50,000 years ago, and this may well suggest that it was at about this time that human languages similar in structure and more or less equivalent in complexity to languages spoken today first appeared. Also, evidence of subtle genetic changes – relating to the FOXP2 gene, for example – may yield clues about which populations were capable of complex language and which were not.
Theories of culture-driven gene evolution tend to support the idea that humans developed language in a piecemeal but not necessarily always gradual process. The basic notion is that the existence of some form of primitive spoken language (without complex syntax or an extensive lexicon) may have created a cultural environment in which certain small genetic changes – e.g. in the FOXP2 gene which is important for (amongst other things) the fine motor control of vocalizations – could have had huge evolutionary advantages and so spread rapidly, prompting further cultural changes which in turn would have facilitated further genetic change, and so on.
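A toy numerical sketch may make the ratchet-like character of this feedback clearer. The parameters and update rules below are entirely invented for illustration; they model no real population and no real gene.

```python
# Toy gene-culture feedback: the frequency of a 'language-friendly' variant and
# the complexity of the shared proto-language each raise the other's growth rate.
# All numbers are arbitrary; this illustrates the ratchet, nothing more.

allele_freq = 0.01        # proportion of the population carrying the variant
lang_complexity = 0.05    # crude index of the proto-language's expressive power

for generation in range(40):
    # the variant's selective advantage grows with how much language there is to exploit
    fitness_advantage = 0.05 + 0.5 * lang_complexity
    allele_freq += fitness_advantage * allele_freq * (1 - allele_freq)
    # the language, in turn, can only grow more complex as more speakers can handle it
    lang_complexity += 0.1 * allele_freq * (1 - lang_complexity)
    if generation % 10 == 0:
        print(f"gen {generation:2d}: allele {allele_freq:.2f}, complexity {lang_complexity:.2f}")
```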
But, in the final analysis, anatomical, genetic and broader archaeological findings will only ever be able to answer very general questions about culture and language (and only to a certain degree of probability) and it is difficult to see how specific questions concerning the nature of very early languages, or questions concerning to what extent particular groups such as Neanderthals or Denisovans developed their own languages or adopted (modified versions of?) the languages developed by modern humans, could move beyond the realm of speculation.
So, even if it could be argued convincingly on the basis of archaeological and genetic evidence that a certain population (modern or Neanderthal) at a certain time was extremely likely to have used a language of comparable complexity to today's human languages, the content of such a claim would necessarily be rather thin – and indeed linguistically vacuous – if any knowledge of the nature of that language is (and must remain) inaccessible to us.
In a recent post I referred to claims made by researchers at the Max Planck Institute that linguistic contacts with Neanderthals may have left discernible traces in the structure of non-African languages. However, given the time-frames involved and the fact that we only have access – and will only ever have access – to a minuscule fragment of the relevant linguistic data, it seems highly unlikely that even the most sophisticated computational approaches will be of much use. The researchers' claims are intriguing but, I would say, far too optimistic about what the sorts of approaches they are proposing could actually achieve.
It needs to be borne in mind that the earliest true writing systems for natural languages for which we have evidence date only from the 3rd millennium BC. Educated guesses and speculations about Proto-Indo-European, the hypothesized language from which the Indo-European language family (which includes Sanskrit, Persian, Latin, Greek and the Romance, Germanic and Slavic languages) is seen to derive, take us further back, but only to about 4000 BC.
Can knowledge of the languages spoken in recent times by preliterate peoples take us further back? Probably not. Though many of these languages have been recorded and analyzed, it would be a mistake to assume, even if the associated material cultures have been relatively stable for (in some cases) tens of thousands of years, that the languages themselves have exhibited anything like a similar stability.
The history of the human languages for which we have no historical written records (usually because there was no writing system but sometimes because written records have survived only in a fragmentary state or not at all) can only be hypothesized, largely on the basis of the elaborate comparative methods devised by philologists in the 19th and early-20th centuries coupled with general speculations about the speed and nature of linguistic change and its relation to broader social and cultural changes.
So it seems clear that, while broad evolutionary developments and migrations may eventually be able to be mapped with a high degree of confidence, the cultures of our ancient, preliterate forebears will only ever be able to be characterized in very general terms. Gaining substantive knowledge of the content of their cultures and belief systems, as of the actual (as distinct from the possible) nature and structures of the languages upon which these cultures were built and depended, lies forever beyond our grasp. The evidence just isn't there.
Monday, June 2, 2014
Neanderthal language
There has been a lot of talk in recent years about the intellectual and linguistic capacities of Neanderthals. This brief overview published last year in the research news section of the website of the Max Planck Institute reflects the currently popular view that their capacities were more or less equivalent to ours.
More controversial is the claim that the origins of modern language date not from about fifty or a hundred thousand years ago but from about one million years ago, "somewhere between the origins of our genus, Homo, some 1.8 million years ago and the emergence of Homo heidelbergensis." [Homo heidelbergensis is thought to represent our most recent common ancestor with Neanderthals, the split occurring about 500,000 years ago.]
More controversial still are claims being made by researchers at the Institute – and publicized in a recent New Scientist article [paywall] which is big on speculation but largely devoid of substantive content – that the cultural interactions between modern humans and their Neanderthal cousins included linguistic exchanges which left discernible traces in the syntax of non-African languages. Possible subtle structural differences between African and non-African languages coupled with detailed computer simulations of language spread would supposedly reveal something about the structural properties of hypothetical Neanderthal languages (which hypothetically impacted on non-African languages only). This is drawing a very long bow.
In a few days I will post some notes and reflections on some broader questions about early humans and the prospects for making progress in understanding their cultures and the nature of their languages.
Tuesday, April 15, 2014
A spectrum of sorts
General talk about views of the world can be very frustrating and unproductive. But reading this piece about the incompatibility between science and most forms of religion (and particularly the associated comment thread with its predictably divergent views) has prompted me to make a few general observations of my own.
The problem is not just that words like 'religion' are vague, but also that more technical terms like 'physicalism', 'naturalism', 'idealism', 'empiricism' and 'rationalism' are also understood in different ways by different people. Countless scholarly articles have been written defining, redefining, defending or attacking particular positions. I may have another look at some of this literature soon, if only to review and refine the terms I use to define my own stance.
But I think the issues that really matter can be set out fairly simply in the form of a continuum. Such a basic, one-dimensional picture cannot, of course, begin to cover all angles or possibilities but it does allow one to represent in a plausible and useful way some of the most important differences in the way people see the world.
At one end of the continuum you have people who don't see any justification for believing in the existence of anything other than the sorts of things with which science (and mathematics) is – at least potentially – equipped to deal, whether one is thinking of the fundamental structures and processes addressed by physics or the more complex structures and processes dealt with by other areas of science.
What people at this end of the continuum reject is the notion that, in addition to the reality (or realities) studied by the sciences (including the social sciences), there is some other reality, not amenable to science, which impinges on our lives. Like a spiritual realm, or a transcendent moral realm, or some form of 'destiny'. The crucial point here is that scientific approaches do not reveal any underlying purpose or goal or enveloping moral reality behind the phenomena of the natural world – if anything, they appear to reveal the absence of any such thing.
At the opposite end of the continuum you have people who embrace a view of the world which purports to go beyond the science and which incorporates spiritual or supernatural or teleological or transcendently moral elements.
At the extreme are believers in spiritual or supernatural forces which can override normal physical laws. Most well-educated religious people today, however, accept that the physical world operates as described by science and that the spiritual or supernatural realm with which their religious beliefs are concerned is – must be – quite compatible with scientific reality. Such sophisticated believers could be seen as embracing both naturalism and (a subtle form of) supernaturalism. Or, looked at another way, a natural world which is embedded in a broader, all-encompassing reality.
More towards the centre of the spectrum are those who claim to reject all forms of supernaturalism but who also reject the hardline scientific view as narrow and impoverished. Advocates of process theology (or process philosophy) come to mind in this connection, but, though they claim to reject supernaturalism and embrace naturalism, theirs is a form of naturalism which goes well beyond the usual understanding of the term.
Ordinary agnostics, who are prepared neither definitively to embrace nor to reject spiritual possibilities, would also find themselves somewhere in the centre.
The central part of the spectrum is admittedly a very ill-defined and perhaps unstable area. It is characterized more by what the individuals involved don't accept than what they do, and I tend to want to interpret their positions as at least tending one way or the other. Process thinkers, for example, for all their explicit rejection of supernaturalism, clearly tend to the religious end of the spectrum. Others, who might maintain links with religious rituals for merely social or cultural reasons for example, tend in the opposite direction, as their actual beliefs may not differ much at all from those who explicitly embrace a hardline, science-oriented view.
On a related matter, it can be argued (on historical, sociological and logical grounds) that philosophy and religion are intimately linked and, though I won't elaborate on that idea here, I think it's worth remarking that a large (and, in America at least, increasing) number of philosophers are not only anti-scientistic but also religious.
Ludwig Wittgenstein was a prominent and interesting example, not least because of the huge influence he has exerted and continues to exert. He kept his religious orientation pretty much to himself. But it was there – and it clearly motivated his philosophical thinking.
As well as his private notebooks, we have detailed accounts by a number of Wittgenstein's friends to support the view that he had strong religious tendencies and commitments. Maurice O'Connor Drury's recollections are particularly important, and Norman Malcolm (another close friend) explained Wittgenstein's vehement rejection of scientism in terms of his religious orientation.
Henry Le Roy Finch has made the point that Wittgenstein was throughout his life a supernaturalist in the mould of Pascal and Dostoievsky. As well as explaining the tenor of his thinking in many areas, this religious orientation also led – more than any other single factor – to his falling out with Bertrand Russell. The gulf between their basic outlooks was just too great.
This view accords well also with that of Ray Monk who has written intellectual biographies of both men, and who, in a lecture I heard him give some years ago, emphasized not only the absolute contrast and utter incompatibility between Russell's secular outlook and Wittgenstein's essentially religious view of the world, but also the way their respective views permeated their philosophical thinking. (Monk identifies very strongly with Wittgenstein's general outlook – and does not hide his distaste for Russell's.)
The problem is not just that words like 'religion' are vague, but also that more technical terms like 'physicalism', 'naturalism', 'idealism', 'empiricism' and 'rationalism' are also understood in different ways by different people. Countless scholarly articles have been written defining, redefining, defending or attacking particular positions. I may have another look at some of this literature soon, if only to review and refine the terms I use to define my own stance.
Thursday, March 20, 2014
Science as a way of seeing
Attitudes to science and attitudes to language are often related. Many science-oriented people are 'linguistic revisionists'. They have a low opinion of ordinary language (because of its vagueness and ambiguity) and seek to reform it or replace it wherever possible with various formalisms. Conversely, a negative attitude to science and mathematics and logic is often evident amongst lovers and respecters of natural language (especially in literary circles for example).
But there is no reason why one cannot combine a passionate commitment to a scientific (even scientistic) view of the world with a profound respect for natural languages – these curious products of biological and cultural evolution – as objects or systems and with a recognition of what these systems are uniquely equipped to do.
To complicate matters, it's also possible to combine a commitment to the formal sciences with a passionate hatred for the physical sciences. This is a not uncommon position, actually, but one I will not deal with here.
What follows, then, are some preliminary and loosely connected notes on the differences between broadly scientific and other modes of thinking, seen in relation to language.
Reasoning and deduction can, of course, be framed in formal terms, and even natural languages can, to an extent, be seen as interpreted formal systems.
Such formal logical approaches – which don't come naturally to most of us – represent a limited but (paradoxically) revealing perspective, rather like an X-ray image, or a monochrome drawing (a landscape, say).
They have their own beauty, these approaches, but it is a spare beauty which derives from abstraction, from leaving things out – like soft tissue in the case of the X-ray, or colour and smell and sound and movement and a third spatial dimension in the case of the drawing.
Revealing and beautiful – and also useful. It was this mode of thinking that gave rise to mathematics, science and technology. And, in the mid-20th century, habits of abstract and reflexive thought finally brought formal systems themselves to life in the form of the digital computer.
But computers, as embodiments of formal thinking, suffer the limitations of formal thinking, and are not well-equipped to deal with the rich parallelism of human perceptions or the tacit knowledge implicit in ordinary human actions and interactions and language use. Their strengths are our weaknesses and their weaknesses our strengths.
What is most notable about normal human brains – in stark contrast to machine intelligence – is their remarkable ability to deal with non-abstract things, and, in particular, with the hugely complex sensory and social realms; in conjunction, of course, with natural language, the bedrock of social life and culture.
Human languages are in fact quite remarkable in their capacity for expressing the subtleties of psychological and social experience. I don't much like the word 'literature'; it's a bit stuffy and pretentious, but it's the only word we've got in English that picks out and honours, as it were, texts which explore and exploit this capacity.
The word 'letters' worked in compound expressions in the relatively recent past ('life and letters', 'man of letters') but is now quite archaic. 'Belles lettres' even more so.
The adjective 'literary' is, however, neither pretentious nor archaic, simply descriptive. It can be a neutral indicator of a specific context of language use. Or it can be used to designate (often pejoratively, it must be said) a particular style or register of language use (in contrast to technical or plain or straightforward or colloquial language, for example).
In the early 20th century, the linguist (and one-time student of Ferdinand de Saussure) Charles Bally saw the need to expand the scientific study of language to encompass the subjective and aesthetic elements involved in personal expression. His notion of stylistics was further developed by thinkers associated with the Prague school – most notably Roman Jakobson, who listed the 'poetic function' as one of the six general functions of language.
[I am always wary when scholars make numbered lists of this kind (suspecting that reality is rather less amenable to clearcut categorization than the scholars would wish).
Though his overview of linguistic functions is harmless enough, Jakobson did in fact have a tendency to drive his more technical insights too hard and too far. On markedness and binarism, for instance. But that's another story.]
On the question of the possibility of a satisfactorily scientific study of style I am undecided.
Certainly, the importance of stylistic elements in actual human communication is often underestimated and communication failures are often the result of stylistic rather than substantive issues. The aesthetic element is also important in its own right (as Jakobson saw).
But scientific approaches are characterized by their narrow focus and abstractness: by what they leave out. And what they leave out is generally the subjective or experiential side of things. Twentieth century phenomenologists and others tried – and failed – to reinsert into the scientific view what had been omitted.
A supposedly 'scientific' approach (phenomenological or otherwise) could never really replace, as I see it, the informal 'close reading' of a text or spoken exchange (for example) by a perceptive reader or listener who was well versed in the language and culture (or sub-culture) in question.
Was a particular characterization plausible or a given piece of dialogue convincing? Was a particular remark witty or just sarcastic or rude? Was someone being condescending in what she said, or kind (or both condescending and kind)?
Often the answers to such questions will depend not only on non-verbal and para-linguistic factors but also on the subtle connotation of a word or turn of phrase.
Logical languages (like the predicate calculus) strip these psychological and emotional and aesthetic elements away; and all scientific language – even in the social sciences – aspires to denotation, pure and simple.
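To make this concrete with a rough example of my own (a sketch only, not drawn from any particular formal treatment): a judgement like 'someone was being both condescending and kind' might be rendered in the predicate calculus as something like

    \exists x\,(\mathrm{Person}(x) \land \mathrm{Condescending}(x) \land \mathrm{Kind}(x))

where 'Condescending' and 'Kind' figure as bare predicate symbols. Whatever tonal or connotative difference separates 'condescending' from, say, 'patronizing' simply has no place in the formula; all that survives is denotation.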
As I started out by saying, that spare, direct approach has its own beauty which stems above all from its power to make us see in a more direct and culturally unencumbered way.
You can interpret the scientific way of seeing things (which goes beyond science as such) in an almost mystical way, in fact: as a means of 'cleansing the doors of perception', of temporarily sloughing off the necessary – and necessarily arbitrary – cultural baggage of social existence.
Sunday, February 23, 2014
Death and the sense of self
This is a postscript to some previous discussions on death, human identity and 'the phantom self'.
These issues are quite maddening because one feels they should be simple. But (certainly as philosophers like Derek Parfit present them) they don't seem so.
I have given my (provisional) views on all this, and one of my conclusions is that Parfit's suggestion that day-to-day survival is not what it seems, being virtually equivalent to dying and having an exact copy live on, is just wrong.
Sure, the notions of the self and identity are problematic, but our struggle for (bodily) survival is at the heart of things, surely. We know what it is to go into an unconscious state and subsequently wake up. And we can imagine – not waking up! (Foresee our own actual death, in other words.)
However, having had various private discussions on this matter, I recognize that some people see it differently from me and would be happy enough to have their bodies destroyed so long as an exact copy survived.
"But look at it from your point of view," I would say. "You go into the (transporter) machine, get scanned, lose consciousness, and that would be that. You wouldn't 'wake up' as the copy (or one of the copies if there were several). You wouldn't wake up at all. Ever. Whereas, of course, for other people 'you' would still be there. Your wife would not have lost her husband, etc.. But you would have lost your wife – and everything else."
"But this you you talk about, what is it? You speak as if it's a soul or an essence..."
Which I of course deny. But I see that my interlocutor just doesn't get what I am saying, and I begin to wonder if I am making sense.
People see these matters very differently, and I suspect that one of my interlocutors may have given an explanation of sorts when he said, "Some people just have a stronger sense of self than others."
Those with a stronger sense of self, then, would be less likely to identify that self with any copy, however exact.
You could also plausibly see a strong sense of self as being associated with a strong survival instinct (and/or egoism), and a weaker sense of self with a less-strong survival instinct. But the crucial question is: how does this translate into truth claims?
It could be that a weaker sense of self tends to obscure – or blur – the simple (and tragic) truth about death. Then again, perhaps a strong sense of self and survival instinct leads one to underestimate the equivocal and tenuous nature of the self.
The human self is a complex – and indeed tenuous – phenomenon, based as it is on cultural and social as well as biological factors. But tying its fate to the fate of the body does not entail identifying it exclusively with the body in any simple way. For the self depends on the body, even if it also depends on other things. And when the body fails, it fails.
A couple of final comments of a more general nature.
It seems clear that a straightforward scientific approach doesn't work on these problems of death and identity, just as it fails to work on other typical philosophical problems – like free will. Could this have something to do with self-reference?
The major paradoxes of logic are self-referential, and the problems being discussed here (and the free will problem also) have a self-referential element.
And though self-reference in logic doesn't relate to a human self but just to concepts turning back on themselves (like a set being a member of itself), there does seem to be a parallel that may help to explain the intractability of these sorts of questions.
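The standard example from logic – offered here only to make the parallel concrete – is Russell's set of all sets which are not members of themselves:

    R = \{x \mid x \notin x\} \quad\Longrightarrow\quad R \in R \iff R \notin R

Either answer to the question of whether R belongs to itself immediately undoes itself. The structure is not the same, of course, but questions in which the self tries to make claims about the self have something of that flavour.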
The problems (or limitations) may, in other words, be logical as well as psychological (and so deeper).
Science aspires to an objective, third-person point of view or 'view from nowhere'. It is not undermined (though perhaps dogged at a fundamental level) by those self-referential logical paradoxes. And it can readily explain (albeit from a general, objective point of view) how first-person perspectives arise in nature – and much about them.
The first-person point of view is fine, in fact – until it starts to reflect on its own nature and make (science-like) claims about itself.
Tuesday, January 28, 2014
Nouny nouns
Most of us come up with ideas which we think are good but which we don't develop or exploit. Ideas for making money or doing good, or – as in the case I am about to describe – ideas which have absolutely no possible commercial or practical applications.
Typically, we discuss these bright ideas with trusted friends or family members and get discouraged when our interlocutors are less than overwhelmed.
So let me recycle here (to the extent that I can reconstruct it from memory) one such idea which was effectively discouraged by an old academic friend and colleague whose views on the matter I may have taken a shade too seriously. Or not, as the case may be.
It relates to the topic of animism, which I raised in my previous post on this site.
There I talked about the so-called 'mind projection fallacy' discussed by Edwin Thompson Jaynes. He pointed to evidence of it in ancient literature and noted that the fallacy in question would have long pre-dated written records.
We have anthropological evidence for something like Jaynes's mind projection fallacy from studies of various non-literate cultures, but my idea was to look for evidence in the structure of language.
For our natural tendency to project human-like intelligence into non-living and non-human nature is obviously reflected in various ways in the grammar and morphology of the languages we speak or know about; and these languages would have not only reflected but also facilitated – as they still reflect and facilitate – animistic modes of thinking.
You find traces of animism even in modern English idioms such as 'the wind blows', but grammatical analysis of both verbal and nominal forms takes us much further back in time.
My intention was to focus on nouns. Willard Van Orman Quine speculated (in his Word and Object as I recall) that the most basic form of noun was the mass noun – like 'sand' – rather than the count noun – like 'hill'. The former doesn't need an article ('the' or 'a'); the latter does.
But, counter to Quine's speculations, it can in fact be demonstrated – by looking at the potential for inflection (grammatical suffixes and so on) of various kinds of noun in a range of languages within the Indo-European family – that the prototypical noun, the 'nounier' noun if you like, is the count noun rather than the mass noun; that, among count nouns, animate nouns are nounier than inanimate nouns; and that nouns relating to humans or human-like agents are the nouniest of all.
My intention, then, was to elaborate and refine and draw out the implications of this fact: that for many languages – including some of the oldest linguistic forms of which we have any knowledge – the nouniest nouns are personal agents.
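For what it's worth, here is a purely hypothetical sketch (in Python) of the kind of comparison I have in mind. The nouns, the grammatical tests and the yes/no judgements are all invented for illustration; real work would rest on attested inflectional and syntactic data from particular languages, not on armchair judgements about English.

    # Toy illustration only: rank a few English nouns by how many
    # prototypically 'nouny' grammatical behaviours they support.
    # The four behaviours checked (all judgements invented for this sketch):
    # takes a plural, takes an indefinite article, takes 'who' as a
    # relative pronoun, is a natural subject of a verb like 'decide'.
    NOUNS = {
        "sand":    (False, False, False, False),  # mass noun
        "hill":    (True,  True,  False, False),  # inanimate count noun
        "dog":     (True,  True,  False, True),   # animate count noun
        "teacher": (True,  True,  True,  True),   # human, agentive noun
    }

    def nouniness(flags):
        # Crude proxy: one point per 'nouny' behaviour supported.
        return sum(flags)

    for noun in sorted(NOUNS, key=lambda n: nouniness(NOUNS[n])):
        print(f"{noun:8s} nouniness = {nouniness(NOUNS[noun])}")

    # Prints sand (0), then hill (2), then dog (3), then teacher (4):
    # mass noun < inanimate count noun < animate noun < human agent.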
Perhaps this idea had already been developed by others at the time I first thought of it. Perhaps it has been discussed and developed more recently. Perhaps it is just not an interesting enough idea to bother with. Or perhaps none of the above applies.
Wishing, then, to maintain – at least for a little while – a state of blissful ignorance on the matter, I am deliberately postponing any scholarly delving.
I have also refrained from mentioning the name of the linguist (now in his eighties) whose work was my jumping-off point. If his name comes up in my (or anyone else's) searching it will suggest that the territory is still relatively virgin.
Sunday, January 12, 2014
Randomness in nature
I have talked before about randomness. Somehow it seems important to know whether the world we live in is driven in part by fundamentally random processes.
Some recent findings seem to confirm (though 'confirm' is probably too strong a word) what quantum theory has suggested all along: that there are basic physical processes which are truly random.
I might also mention in this context that, in doing a bit of reading on probability and related matters, I happened to come across some references to, and a paper by, the physicist Edwin Thompson Jaynes (1922-1998). Jaynes promoted the view that probability theory is an extension of logic.
This is intuitively plausible. The concept of truth (and truth tables) lies at the heart of propositional logic, and T is, of course, equivalent to a probability of 1, F to a probability of 0. Probability theory just fills in the bits in between in a quantitative way!*
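A minimal sketch of my own (not something Jaynes writes out in quite this way): restrict probabilities to the values 0 and 1 and the elementary rules reproduce the familiar truth tables.

    # Sketch only: with P(A) and P(B) restricted to {0, 1}, the basic
    # probability rules behave like the NOT/AND/OR truth tables (T = 1, F = 0).
    def p_not(pa):
        return 1 - pa

    def p_and(pa, pb):
        return pa * pb              # product rule; exact at 0/1 whatever the dependence

    def p_or(pa, pb):
        return pa + pb - pa * pb    # inclusion-exclusion

    for pa in (0, 1):
        for pb in (0, 1):
            print(pa, pb, "| not A:", p_not(pa), "| A and B:", p_and(pa, pb), "| A or B:", p_or(pa, pb))

    # Values strictly between 0 and 1 are then the quantitative
    # 'bits in between' that pure logic leaves out.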
Of particular interest to me is Jaynes's notion of a 'mind projection fallacy' which he sees as a root cause of much false thinking, including what he sees as the mistaken ascription of randomness to (certain) natural events or processes.
But his case seems to suffer from an overdependence on personal intuition as well as from a lack of historical perspective. For example, he develops** his concept of a mind projection fallacy without (to my knowledge) relating it to other clearly similar or related concepts – from animism to teleological reasoning – which have been widely discussed over the last century-and-a-half.
Jaynes argues that this fallacy is evident not only in the thinking of primitive cultures and amongst uneducated people but also in scientific contexts. He uses his mind projection idea to argue against certain interpretations of probability theory and statistics as well as against certain interpretations of quantum mechanics.
The basic thought seems to be that theoreticians are all too inclined to project their perspectives (their particular states of knowledge or ignorance) on to reality. He rejects, for example, the ascription by probability theorists – and physicists, it seems – of 'randomness' or 'stochastic processes' to nature. He rejects the Copenhagen interpretation of quantum theory as a mere projection of our ignorance.
But, as I say, I find it a bit off-putting that (in the cited paper, at any rate) he not only fails to acknowledge that others have developed and discussed notions very similar to his own, but also – ironically – that he seems to sensationalize and exaggerate the significance of his own insights and intuitions.
More on the substance of his claims later, perhaps.
Let me take this opportunity to thank past readers for their interest and commenters for their comments and to wish everyone a pleasant 2014.
* Like other objective Bayesians, Jaynes sees probability theory as a formal, axiomatic system, and the calculus of propositions as a special case of the calculus of probabilities.
** Here, for example (PDF).