Noam Chomsky must enjoy making himself unpopular. His extreme (and extremely polarizing) political views are well-known.
I once sat down with one of his political tracts with a genuinely open mind, prepared to give it a go. But all I saw after devoting several hours to the book was anger and rhetorical posturing. I just couldn't figure out where he was coming from.
As I learned later, his attitudes are at least in part explained by his family background. Chomsky grew up in left-Zionist circles. His father was a distinguished Hebrew scholar and both his parents (his mother was more radical than his father, actually) were followers of the views of the essayist Asher Ginsberg. Writing under the name Ahad Ha'am, Ginsberg rejected purely political Zionism and promoted the idea of Jewish cultural and spiritual rebirth. By his early teens, Chomsky had embraced anarchism. He later identified with anarcho-syndicalism, working as an activist for various radical causes, often in association with radical Christians, whose spiritual and moral motivations for political action were similar to his own.
But what of the pioneering linguist? This side of Chomsky interests me, largely because I was taught by one of his students, and adopted many of his linguistic ideas. It matters to me whether (or to what extent) these ideas reflect reality.
So I have begun doing a bit of reading to see what Chomsky is currently saying and how this relates to the current state of research. This interview/article by Yarden Katz is a good place to start, though Chomsky's broad-brush references to intellectual history (Galileo is a big favorite of his) are not totally convincing. Linguistics is not physics, and it's conceivable that there is nothing there to understand in the way early physicists came to understand the principles of classical mechanics.
People generally draw a clear distinction between the two Chomskys, the political activist and the linguist. The former is usually characterized as progressive and radical; the latter, once seen as radical, is now seen as conservative or even reactionary.
In fact, Chomsky's scathing attacks on the trend to base research projects in linguistics and the cognitive sciences on Bayesian probability do make him sound like a bit of an intellectual reactionary. But the real issue is whether there is truth in his criticisms.
Bayesian probability is not a topic I know well enough to write about with any authority, but this piece, by a graduate student working with Bayesian methods in conjunction with traditional syntactic theory, seems very balanced and makes Chomsky's strictures on Bayesian approaches look a bit simplistic.
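For readers unfamiliar with the basic machinery, here is a toy sketch of the kind of inference at issue: two invented 'grammars' are scored by how well they predict some observed sentences, weighted by a prior. The grammars and all the numbers are made up purely for illustration; no actual research program is this simple.

    from math import prod

    # Two hypothetical 'grammars', each assigning a probability to each
    # of three observed sentence types. All numbers are invented.
    likelihoods = {
        "grammar_A": [0.4, 0.3, 0.2],
        "grammar_B": [0.2, 0.2, 0.3],
    }
    priors = {"grammar_A": 0.5, "grammar_B": 0.5}  # no prior preference

    # Bayes' rule: posterior is proportional to prior times likelihood.
    unnorm = {g: priors[g] * prod(ps) for g, ps in likelihoods.items()}
    total = sum(unnorm.values())
    for g, p in unnorm.items():
        print(f"P({g} | data) = {p / total:.3f}")  # A: 0.667, B: 0.333

The point of contention, as I understand it, is not this arithmetic, which is uncontroversial, but whether weighing hypotheses against corpus data in this way can ever reveal the underlying structure of the language faculty.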
The main point I want to make, however, is that the two Chomskys may have more in common than meets the eye. One can see similarities in patterns of argumentation and thought between the political thinker and the linguist. One may also be able to trace some of Chomsky's basic convictions regarding the nature of human thought and language (as well as his political convictions) back to childhood influences.
If you read reports of his talks to linguists*, it's clear that Chomsky is deeply involved in the academic politics of research funding and concerned with the survival of linguistics as a distinct academic discipline, as well as with defending his status and reputation. These academic-political preoccupations (like any political preoccupations) encourage polarized thinking. What counts in the end is one's own side winning, not objective truth. (After all, the winners write the history books, including the intellectual histories.)
Whatever his motivations, Chomsky certainly exhibits a tendency to see things in terms of dichotomies, and is something of a past master of the straw man approach to dealing with challenges.
What, though, of the ideas that are being fought over? This, after all, is where the real interest lies. Does Chomsky's general view of life impinge on (and perhaps distort) his ideas on language and the mind?
The drivers of our thinking are always deep and obscure. Chomsky's longstanding moral (and, indeed, spiritual) preoccupations would, in my view, be likely to have had a profound influence on the way he sees the human mind, as well as reinforcing his views on the status of reason and intellectual intuition.**
I am really only starting to explore Chomsky's cultural and spiritual background, and I may return to these themes in the future and try to make a stronger case. There is much that remains obscure (the extent and nature of his secularism, for example).
I also need to do a bit of homework on some of the topics discussed in the interview. Frankly, I have sometimes found Chomsky's writings on language and thought, and Chomskyan linguistics in general, to be somewhat unclear or opaque, almost arbitrary in fact. I think this probably reflects Chomsky's commitment to a form of rationalism which is quite at odds with my fairly mundane empirical assumptions.
The interview is usually an easy form of discourse to follow and understand, ideal for introducing difficult thinkers to a wider audience, but Katz's interview with Chomsky remains (to me at least) obscure in parts. And I don't think Katz is to blame.
At first, I was confused by Chomsky's comments on Mendel. On the face of it, the case of Mendel argues for the power of statistical approaches, especially at a time when the basic science is undeveloped. But Chomsky's point (essentially that Mendel was aspiring to a deep understanding, and sought significance in the patterns he observed) is a fair one.
However, his arguments in favor of unification but against reduction in the sciences are less clear to me.
Chomsky's allusion to the case of chemistry not reducing to an older physics (because the older physics was wrong) seems, in the context of what he is arguing, a bit puzzling. Would not this example argue for more tolerance of statistical and practical approaches, which at least deal with reality rather than relying on prematurely postulated grand explanatory theories?
Chomsky himself says that cognitive science is at a primitive stage. 'Pre-Galilean', he calls it, but, as I said, I doubt that the comparison with classical mechanics is all that useful.
A more appropriate comparison for what Chomsky has been trying to do might be Einstein's doomed attempt, during the last decades of his life, to create a unified field theory of physics.
Chomsky's thoughts on the origins of human language are very speculative. In fact, his account of a hypothetical individual in a group of non-thinking individuals 'getting language' (through a genetic mutation), and so being able to think, sounds quite far-fetched. (Chomsky, reasoning in a strangely a priori manner, sees language as an internal thing rather than as essentially communicational.)
There is, however, a lot of truth in what he says about science and intellectual fashion, and, yes, about language also.
I am aware that there are deep and serious questions at issue here (about word order, context-free grammars and so on) about which I have said nothing. Chomsky has made significant contributions to the application of formal language theory to linguistics, and has influenced research directions profoundly. Just because other approaches may currently be in vogue does not mean that the work he inspired was misguided.
I suspect that, as our understanding of natural languages (and natural language processing) improves, many of the principles and insights developed by linguists working in the tradition he pioneered will be vindicated (and incorporated, one way or another, into truly effective natural language processing algorithms). But many of the key questions, both philosophical and practical, appear at this stage to remain unresolved.
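For non-linguists, it may help to see what a context-free grammar actually is. Here is the standard classroom fragment, with a naive random generator; it is a toy, of course, not a serious model of English.

    import random

    # A tiny context-free grammar: each non-terminal maps to a list of
    # possible expansions. Any symbol not listed as a key is a terminal.
    grammar = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N":   [["linguist"], ["theory"]],
        "V":   [["criticizes"], ["defends"]],
    }

    def generate(symbol="S"):
        if symbol not in grammar:  # terminal: emit the word itself
            return [symbol]
        expansion = random.choice(grammar[symbol])
        return [word for part in expansion for word in generate(part)]

    print(" ".join(generate()))  # e.g. 'the linguist defends a theory'

The debates alluded to above concern, among other things, whether grammars of this general type are powerful enough for natural languages, or whether something stronger is required.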
Finally, a few thoughts on science and history.
Chomsky was asked by Katz about the importance of the philosophy of science and said it may be an interesting area but it doesn't contribute to science. What he considers valuable is the history of science. And he tries, as we have seen, to apply lessons from the history of science to emerging disciplines such as the cognitive sciences.
Though I am skeptical of some of the lessons he purports to derive, it's clear that a knowledge of the history of one's discipline and the history of ideas in general can allow one to put current research and current ideas into some kind of perspective.
Such knowledge is a part of the general culture a scientist might have, rather than a core component of his or her expertise. It's an optional extra, scientifically speaking.
Some people are just more interested in fitting their knowledge into a narrative than others, even sometimes preferring to learn their mathematics, physics, psychology or whatever in part as history.
Others have no interest in history or historical approaches, which they see as a waste of time. And so, for them, they would be.
People have different ways of seeing things, that's all: different strengths, different ways of learning, different aspirations for understanding.
But the odd thing is, for all his talk about history, Chomsky strikes me as a basically and profoundly ahistorical thinker.
His fundamental insights within linguistics focus almost exclusively on the synchronic rather than the diachronic aspects of language, and aspire, in effect, to an abstract rationalism.
And look at the nature of his political work, which is perceived as extreme not just by conservatives but also by mainstream progressive thinkers. A true historical sense would have at least mitigated the free-floating and self-generating logic of his polemics.
In fact, you could make a case that his main concern with history is to mine it for debating points in order to advance his causes, defend his theories, and, by extension, to cement his own position in the narrative of science.
But it can't be denied that Chomsky still retains a certain aura, a certain iconic status. This is due, I think, not just to his achievements, but to an unrelenting seriousness, to a rare combination of intellect and passion.
* This hostile account of Chomsky's performance at an invitation-only event in London last year (by the distinguished linguist Geoffrey Pullum) is very revealing.
** It's worth mentioning in this context that, unlike most social scientists and perhaps curiously (given the rigorously abstract and scientistic tenor of his work in syntax and related areas), Chomsky takes literary art seriously and respects the value of the writer's insight into society's moral and psychological complexities. But, again, this becomes much less surprising in the light of his early exposure to (and continuing interest in and respect for) Hebrew literature.
Monday, December 24, 2012
Here and there
I have just put up a few notes on Conservative Tendency relating to recent problems with comment spam (fixed, I think, for both blogs), and future directions.
As I said there, I will post something on Noam Chomsky's linguistic ideas soon on this blog, but, if I did a critique of his political views, it would go up on Conservative Tendency.
Actually, the statistics for Language, Life and Logic, initially very poor, seem to be picking up. I will certainly persist with it, for the time being at least.
LL&L is not entirely free from ideology, but the idea is to keep it free of that notorious divider of friends and families, partisan politics.*
Merry Christmas.
* In fact, a bit of politics has crept into the Chomsky draft in the sense that I am beginning to see early familial (moral and cultural) influences as relevant to the general way he conceives of thought and language.
Wednesday, December 12, 2012
Kirk dies
Massimo Pigliucci has recently referred to the classic puzzle (which I alluded to in my previous post on this site) about whether the destruction of one's body entails annihilation if a reconstructed version of that body survives.
He was trying out a new online tool for addressing questions to the philosophical community, and one of his questions concerned 'what happens when Captain Kirk steps into a transporter device'.
The answers he got were all over the place. The most common one amongst philosophical academics interested in metaphysics was: 'Kirk dies, a Trekkie cries.'
But, as a commenter put it: 'Presuming 1) that the Kirk who emerges at the end of the transportation process is physically identical to the Kirk that went in, and 2) there's no untransportable and intangible soul-stuff that makes Kirk Kirk, I don't understand how you can meaningfully say that Kirk has died after being transported.'
The response to Pigliucci's query (coupled with the comment thread attached to his post) illustrates a problem with much philosophical discourse.
The topics being addressed may or may not be interesting or important (this one is both, I think), but there is no standard or rigorous method for dealing with them, and so no real evidence of a coherent academic discipline (or profession) in operation. (Bear in mind that Derek Parfit set the ball rolling on this particular discussion three decades ago.)
Often philosophical questions seem to be without satisfactory answers (which suggests that the questions are confused, in the sense of carrying too much implicit metaphysical baggage). If an answer comes, it is, more often than not, a mere trigger for counterarguments, and more questions... The process just doesn't move forward most of the time.
Scientific knowledge is relevant to making sense of thought experiments like this, however. For example, if the processes involved violate known science then the whole discussion is just a fantasy and a waste of time. (Bad science fiction, if you like.)
One important scientific issue raised in the discussion relates to whether atoms can be distinguished from one another as macro-objects can. It seems not. Apparently, it just doesn't make sense to ask whether the atoms constituting the reconstructed body are the same ones which constituted the original body or different ones. The very notion of a 'copy' is called into doubt.
Inanimate objects (specifically the Mona Lisa) are discussed in the light of this fact. I think Ian Pollock goes too far in suggesting that if there were a perfect (atom for atom) copy of the Mona Lisa, then the notion of the real, original Mona Lisa would no longer have a clear, objective meaning. It would, surely. It is the one Leonardo actually painted. As Massimo Pigliucci points out, Pollock is not taking history seriously.
And, if the real Mona Lisa is the one actually painted by Leonardo (defined by its history as well as physics), so my real body (on which my subjective consciousness depends) is also defined in part by its history.
The heart of the question of personal identity relates to first-person experience, to my consciousness of being me and being alive. Would Kirk be right to have misgivings about entering the transporter?
Clearly, subjective experience is entirely dependent on a particular physical body. It is the body that is conscious. So, in the end, 'I' am my body in the sense that 'my' fate is inextricably bound up with the fate of this body. If it goes, 'I' go.
I put the pronouns in quotes to indicate that I personally doubt that there is any substantial thing, any entity, which is me. 'I' am a kind of composite of experiences: very basic sentience in the here and now (the sort of thing any living creature, even the most basic, might have) plus memory. When a certain level of neuronal complexity is reached, you have a sense of a continuing self.
The real mystery lies, in my view, with basic sentience rather than with identity. Sentience is a real, robust phenomenon whereas personal identity is arguably an illusion as there is no 'self', just a (sentient, etc.) body with a complex brain.
At my death nothing of substance will die, apart from the body.
Regarding the transporter, I would have to agree with Massimo Pigliucci that it kills Kirk. The copy may have Kirk's memories, but subjectively Kirk goes into the transporter, is scanned and never wakes up.
To finish, here is a little thought experiment of my own, a little meditation on the nature of death.
We willingly go to sleep at night. We willingly get anaesthetized for an operation. We might also be happy to go into 'cold storage' for a long space journey or to survive a devastating catastrophe on earth (a 'nuclear winter', for example).
But what if, though we could be certain the hibernation device would not fail to keep our body alive and in a resuscitable state, we just did not know whether or not it would ever get around to waking us up?
Going into such a device becomes exactly equivalent to a game of Russian roulette. Death (as in the death of the body) is functionally equivalent to not waking up, ever. All the death of the body does is make it impossible ever to wake up. It takes away hope.
But, from the point of view of the unconscious person, hope or any kind of expectation is irrelevant. So the experience of death is equivalent to the experience of going into a state of unconsciousness, nothing more.
[Update, July 11, 2014: I have realized that there is a flaw in this argument. As soon as I have time I will write a new post explaining what I see the flaw to be.]
Saturday, December 1, 2012
Death or immortality?
Be assured that I am not prone to having mystical experiences, but I do, it must be said, seem to spend an inordinate amount of time in that twilight zone between sleep and waking (either going into or coming out of sleep). And in such a state, for some hours very early one morning, I wrestled with the question of death and came to a conclusion.
At the end of it all I felt absolutely sure that I (and presumably you too) would never, could never, be totally snuffed out.
There are two basic ways of responding to such an 'insight', as I see it: to take it at face value (as I did at the time), or to take it merely as evidence for how our brains work.
On the second (and more plausible) interpretation, all I did on that sleepless morning was to demonstrate to myself that my conscious self (due to the limitations of my brain and presumably all human brains) was incapable of conceiving of its own future nonexistence.
I know some people claim to be able to conceive of their future nonexistence (and to be quite happy about the prospect), but I would argue (like Matthew Hutson) that such people are still imagining themselves as a faint presence in their own post-death future.
Of course, nothing concerning the reality or non-reality of survival can be inferred from the fact (if it is a fact) that we cannot conceive of our own individual deaths.
Just getting clear about what (if anything) personal identity is, and making sense of the notion of such an entity surviving the death of the body with which it had been associated, is a very difficult task. Sometimes I think it is a futile one.
I may have more to say about these and related questions in the future, but, just to give an indication of the sort of thinking which I think touches on the nub of the problem, I want to mention a classic thought experiment devised by Derek Parfit.
Briefly, it is about a choice of means of transport. It is some time in the future and you need to visit Mars on a regular basis. You have the slow option of a space ship; or the speed-of-light option which involves a Star Trek-like scanner which records your body's exact physical state and sends the information to Mars where you are reconstructed, memories intact. The scanning process is fatal, but it doesn't matter as you will be aware only of having arrived on Mars.
Parfit thinks that people would get over their initial reluctance to use the new system very quickly, and that we wouldn't feel as though the reconstructed people were just copies of defunct originals.
But what if a number of copies were made? And, most importantly, how do I know that if I was scanned etc., I would, from a subjective point of view, 'wake up' on Mars (or anywhere), rather than just die, pure and simple, copy or no copy?*
Now, all this may sound very hypothetical and irrelevant to whether you or I will survive (in some sense) our respective deaths. But new developments in cosmology, notably the theory of eternal inflation, make it very relevant. For it appears likely that exact (and not so exact) copies of us do in fact exist in distant and forever inaccessible reaches of an unimaginably large and expanding complex of universes (variously called the multiverse or the megaverse).
I know it sounds fanciful, but leading physicists have put forward such views; and, though I remain personally skeptical about particular theories, the notion that the cosmos is (infinitely) more than what we can observe or even potentially have access to is very plausible and generally accepted in the physics community.
In the end, the (possible) existence of duplicate and similar worlds probably has no bearing on whether my subjective sense of self will be extinguished at my death. It is, however, a comfort to know that the cosmos may not be as boringly bounded as mid-20th century science suggested.
It may be going too far to say that anything is possible, but the vista of possibilities has certainly expanded.
*Parfit doesn't believe we relate to the future (or the future relates to us) in the way we think we do (or in the way we think it does). As I recall, he even suggests that the future should not concern us any more than the past.
When I first read Parfit's book Reasons and Persons I struggled with this idea for a while, but finally gave it up as being inconsistent with the fact that, as individuals, we plan (or fail to plan) for the future and enjoy or suffer the consequences. (Parfit's view would be, I presume, that these experiences were not being had by a self-entity that moved from the past to the future or perhaps by any entity at all.)
Labels:
death,
Derek Parfit,
immortality,
Matthew Hutson,
personal identity
Tuesday, November 20, 2012
Where the mystery lies
If I appear to have a bit of an obsession with David Berlinski's writings, it may be because I share one or two of his obsessions (don't ask); or just that I have been charmed by his politically incorrect persona. But let me make it clear that I disagree utterly with his ultimate conclusions about human life and science (with his basic view of the world, in fact).
In a way, he is my ideal interlocutor, a deft articulator of a point of view I respect but reject.
He sees the scientific view of the world (which most of us implicitly accept) as being essentially ideological, as a set of commitments
… conceived without justification, the commitments determining the evidence rather than the reverse, and this by means of a psychological process as difficult to discern as it is to deny. The largest of these commitments, and the one least examined because most tenaciously held, is that the universe is nothing more than a system of material objects. Beyond this system nothing. A universe of this sort might seem repugnant to most men and women, but many physical scientists have proclaimed themselves satisfied by a world in which there is nothing but atoms and the void, and they look forward to their forthcoming dissolution into material constituents with cheerful nihilism.
An uneasy sense nonetheless prevails (it has long prevailed) that the vision of a purely physical or material universe is somehow incomplete; it cannot encompass the familiar but inescapable facts of ordinary life. A man speaks, sending waves into the air. A woman listens, the tiny and exquisite bones in her inner ear vibrating sympathetically to the splashes of his voice. The purely physical exchange having been made, what has been sound becomes what has been said; heated by the urgency of communication, the sounds begin to glow with meaning so that an undulating current in the air can convey a lyric poem, issue a declaration of war, or say with terrible finality that it's over. Making sense of sounds is something that every human being does and that nothing else can do. More than three generations of mathematical physicists grew old before their successors understood black-body radiation; the association between sound and meaning is more mysterious than anything found in physics. And we, too, are waiting for our successors.*
Part of his method, evidently, is to exhaust (exasperate?) us with his rhetoric. But here are a few unrhetorical points in reply.
Firstly, let's accept that old-fashioned, commonsense materialism is, in the light of quantum mechanics and a computational perspective on the world, no longer viable. Even so, the world can still be seen as a physical, if not a 'material', system. The 'atoms' of this world can be seen as (physical) events or processes rather than as little bits of stuff.
Is such a view incomplete? Of course. It is concerned only with what underlies and ultimately generates and powers the pageant, not with the pageant itself. The pageant of life needs to be suffered or enjoyed or analysed or interpreted in its own terms.
Berlinski is right to suggest that human language is wondrous and unique, but wrong to see deep mystery in the meaning of sentences. If there is a deep mystery of meaning, it resides also in animal communication systems, surely. And those more primitive systems would lie closer to the mystery's source.
In fact, in my view, the mystery lies (if it lies anywhere) with subjective experience rather than with communication; and, of course, in the broader question of why there is anything at all.**
* The Advent of the Algorithm (Harcourt 2001), pp. 249-250.
** By the way, the latter question is connected to the first, because, in a sense, a world of inanimate objects, objects without a subjective sense (in other words a world with no one to see it, even to observe its traces as we do the early universe) is equivalent to nothingness, is it not?
Labels:
communication,
language,
mystery,
physicalism,
sentience
Monday, November 5, 2012
Williams syndrome, language and the brain
In recent posts I have made a number of claims about language and the brain. Allow me to clarify and develop a couple of points.
I don't really want to buy into the debate about various versions of modularity or other theories of mental functioning. For one thing, I don't know the science well enough. I don't have a theory, but I don't know that I need one either.
Which is not to say that it is not important to have a basic understanding of how our minds work. My point is that such an understanding needn't take the form of a theory. It may simply develop from a general (or specialist) knowledge of pertinent disciplines (such as psychology or linguistics), and as a considered response to various kinds of evidence. I am particularly interested in the evidence provided by injuries and genetic disorders which affect cognitive and emotional functioning.
Certain genetically-caused disorders and brain injuries seem to provide evidence that language is in some sense a distinct system - or rather a set of systems - even if it interacts (as it obviously does) with non-linguistic processes. How else can you account for people who have a language deficit but can think well in other respects, or, conversely, who may be seriously cognitively impaired and yet maintain excellent language abilities?
Take Williams syndrome, for instance. It is a genetic disorder characterized by a range of medical problems, developmental delays and learning disabilities. Children with this condition seek interaction with others but are very vulnerable as they lack normal caution and social understanding. They are typically unable to cope with numbers and abstract reasoning. They also have impaired gross and fine motor skills.
On the positive side, they often have an affinity for music (and perfect pitch). And they also tend to do well linguistically, at least in certain respects.
Williams syndrome, like so many other conditions which impact on brain function, is selective in its effects. If specific aspects of thinking are adversely affected while other specific aspects are not affected or are enhanced, then this certainly supports the view that the brain consists of many (interacting) systems and sub-systems.
Linguists, of course, see language from various points of view corresponding to various sub-disciplines: phonetics (where the focus is on the actual sounds of language), phonology (more abstract), morphology and syntax, semantics, pragmatics, etc. In other words, language has many aspects, so it is misleading to talk about language ability without specifying exactly what one is talking about.
Likewise, it is not particularly helpful to talk about the brain's capacity for language per se. Better to focus on the particular processes which language use requires, like hearing (or seeing in the case of reading); interpreting the raw data (identifying phonemes and lexemes, parsing, etc.) and so understanding; or speaking (which involves not only mentation but also a very complex sequence of fine motor processes).
Children with Williams syndrome are typically slow to start speaking. This is presumably related at least in part to their fine motor problems. Most reference sources say that older children and adults with WS speak fluently and grammatically and have a good concrete, practical vocabulary (though abstract vocabulary remains deficient).
I picked Williams syndrome to focus on in this post because of an anecdotal report I remembered reading about a profoundly retarded girl with WS who nonetheless had an unusually extensive vocabulary and was able to invent strikingly original stories and fantasies. But the more I read about Williams syndrome the more complicated - and equivocal - the picture looks.
For example, consider this (from a recent research report* abstract): 'Williams syndrome (WS) is a neurodevelopmental genetic disorder, often referred [to] as being characterized by dissociation between verbal and non-verbal abilities, although a number of studies disputing this proposal is emerging.'
And in their own study the researchers found significantly more speech disfluencies (hesitations, repetitions, pauses) in the WS group than in a typically-developing group.
So the lesson of my story is that everything concerning the human brain is likely to be more complicated than it seems, and that only scientific findings - rather than models or theories - can give specific answers to specific questions. Of course, science requires its models and theories, but they are always provisional, a means to an end.
And, in the context of such reflections, it is hardly surprising that I find myself becoming more and more skeptical about certain Chomskyan assumptions which have been part of my mental furniture since I took a linguistics course taught by one of the Master's protégés a couple of decades ago.
* Rossi, N.F. et al. 'Analysis of speech fluency in Williams syndrome.' Res. Dev. Disabil. 32(6) (2011): 2957-62.
Labels:
language,
linguistics,
Noam Chomsky,
the brain,
Williams syndrome
Friday, October 26, 2012
Quantum lemonade
Seth Lloyd's popular book* on quantum computation, life and the universe impressed me when I first read it a few years ago. I had the sense that Lloyd was saying something very important for our understanding of reality, of what ultimately underlies the whole shebang.
I still think the basic thesis of the book - that the cosmos is a quantum computer - is fascinating and maybe even true. Certainly, the parallels between thermodynamics and information theory suggest that information (bits, or qubits, and their operations) is absolutely fundamental to an understanding of the world and - speaking very loosely - the basic stuff out of which we and the cosmos are made.
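One standard way of making that parallel concrete is Landauer's principle: erasing a single bit of information must dissipate at least kT ln 2 of heat. A back-of-the-envelope calculation (the constant is standard; the choice of room temperature is mine):

    from math import log

    k_B = 1.380649e-23  # Boltzmann constant, joules per kelvin
    T = 300.0           # roughly room temperature, in kelvin

    # Landauer's bound: minimum energy dissipated per erased bit.
    E_min = k_B * T * log(2)
    print(f"Landauer limit at {T:.0f} K: {E_min:.2e} joules per bit")
    # ~2.87e-21 J: minute, but not zero. Erasing information costs energy.

A minute quantity, but the principle it expresses, that information processing is ineluctably physical, is just the point Lloyd builds on.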
But this recent article by Seth Lloyd disappointed me in a couple of ways.
Lloyd's book is beautifully written, a model of popular science writing. The science is clearly and simply presented, and there is some good - if at times only tangentially relevant - autobiographical background material. (The story of the death of Heinz Pagels is unforgettable. 'Heartbreaking', one reviewer called it.)
By contrast the article is in large part a rehash of things Lloyd has said many times before (for example, about the recalcitrance of atoms and sub-atomic particles, their reluctance to do what we want them to do and the need for infinite guile and patience on the part of quantum engineers). And unfortunately the metaphors are strained and distracting, in my opinion, and just a touch condescending. I think Lloyd is trying too hard not to sound like a boffin.
But the most significant thing about this recent piece is that in it Lloyd doesn't attempt (as he might well have done) to talk up the prospects for serious quantum computers. On the contrary, the whole program to develop and build useful quantum computers, about which he was so sanguine in his book, is presented as being somewhat problematic.
He writes: "The quantum sensitivity [Nobel Prize-winner Serge] Haroche identified certainly makes quantum computers hard to build, but it's also that very sensitivity that makes funky quantum phenomena such as Schrödinger's cat states the basis for hypersensitive detectors and measurement devices... What's bad for quantum computation is good for precision measurement - if life deals you quantum lemons, make quantum lemonade."
In other words, if we can't have miraculously powerful computers of an entirely new kind, we can at least have very accurate clocks. Mmm.
Guess I was a bit naïve to believe the hype.
•••••••••••••••••
Come to think of it, years ago I was quite excited about artificial intelligence. And they can't even do a convincing natural language interface yet.
Frankly, though, I don't much care about whether these technologies eventuate or not. What interests me more is the light that research into computing - digital and quantum - has thrown on some perennial questions.
The old answers to fundamental questions are just no good any more. And if some of the old answers do get a new lease on life, it will only be, I suspect, because they happened to prefigure an explanation informed by information theory, quantum mechanics and/or other recent theoretical work in physics or related sciences.
There is hype about technology and hype about basic science. But the fact is, though progress seems slow in both spheres, progress is indeed occurring.
Which is more than can be said of perhaps any other area of human life or endeavour.
* Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos (Knopf, 2006).
Friday, October 12, 2012
One, two, many
The question of the relevance of natural language to counting and calculating capacities has been raised in recent years in connection with two similar research projects.
In a well-known study published in 2004, the counting abilities of the Pirahã people of the Amazon were examined. Their language lacks number words (other than for one and two). The researchers suggested that language (rather than other societal or environmental factors) was the crucial factor in explaining the poor counting abilities of members of this tribe.
Though not everyone was convinced by the researchers' claims, more recent research on several adults in Nicaragua who were born deaf and never learned Spanish or a formal sign language provided somewhat more convincing evidence of the importance of language for counting ability.
Elizabeth Spaepen (of the University of Chicago) and her colleagues conducted experiments involving, for example, the experimenter knocking her fist against the subject's fist a number of times and asking the subject to respond with the same number of knocks. (Iteration, note, rather than objects.)
'So if I were to knock four times on their fist,' commented Dr Spaepen, 'they might knock my fist five times.' *
The earlier research on the Pirahã involved similar tests and similar results, but there was nothing to show that language was the crucial factor. A stronger case for language being the key factor can be made on the basis of the more recent research, as the Nicaraguans, unlike the Pirahã, were living in a culture rich in counting systems.
Daniel Casasanto (of the Max Planck Institute for Psycholinguistics) points out that the human brain is good at approximating, e.g. distinguishing between ten and twenty objects, but needs a counting system to distinguish between ten and eleven, say.
'What language does,' he explains, 'is give you a means of linking up our small, exact number abilities with our large, approximate number abilities.'
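As a toy illustration of that linkage (my sketch, not the researchers'), the standard "scalar variability" model of the approximate number sense has the noise in an estimate grow in proportion to the quantity being estimated - which is exactly why ten versus twenty is easy and ten versus eleven is hopeless without exact counting:

```python
import random

def noisy_estimate(n: int, weber: float = 0.15) -> float:
    """Approximate-number-sense estimate of n: Gaussian noise whose
    spread grows in proportion to n (scalar variability)."""
    return random.gauss(n, weber * n)

def discrimination_rate(a: int, b: int, trials: int = 10_000) -> float:
    """How often noisy estimates of a and b land in the right order."""
    wins = sum(noisy_estimate(a) < noisy_estimate(b) for _ in range(trials))
    return wins / trials

random.seed(0)
print(discrimination_rate(10, 20))  # ~0.999: easy to tell apart
print(discrimination_rate(10, 11))  # ~0.67: not much better than chance
```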
As I see it, language provides for individuals, societies and cultures a kind of bridge to sophisticated forms of counting and calculation. Number words (in conjunction with other aids like fingers) facilitate simple forms of counting and these form a basis for more advanced techniques incorporating symbols and calculating devices.
Though number words are an intrinsic part of language, counting systems by and large are not. And - significantly - the more sophisticated the counting and calculating systems are, the less dependent they are on natural language.
So I don't see any necessary or intrinsic link between natural language and counting systems.
Historically, it may well be that only societies with number words went on to develop sophisticated counting systems and mathematics generally. And it may well be that, for most human children, learning number words is a prerequisite for learning to count and do basic arithmetic.
But this does not mean that arithmetic is in any fundamental way dependent on natural language.
Even in terms of human psychology, the link between language and calculating ability is pretty tenuous.
Think of autistic savants, for example. Are there not many instances of individuals who lack the ability to use and process language and yet whose brains display advanced calculating abilities?
* Wittgenstein would have had a field day with this!
Friday, September 21, 2012
Numbers and language
Number words come in various categories. There are the cardinal numbers (in English: one, two, three...), the ordinal numbers (first, second, third...) and the adverbial numbers (once, twice, thrice). There are also other number-word categories, but my special interest here is in the adverbials.
On the face of it, the cardinal numbers seem more linguistically primitive in the sense that they constitute the basic form upon which the adverbials are built. For example, the Middle English ones (= once) is an inflected form (genitive) of the Middle English word on (= one).
In Latin the situation is slightly different. The first few adverbials (semel, bis, ter, quater) are not obviously derived from - though all but the first are related to - the equivalent cardinals. Semel comes from the Proto-Indo-European *sem.
But irrespective of which category of number represents the earliest form linguistically, a case could be made that mathematically (and logically) the adverbial form is the basic one.
This is just a preliminary idea and I don't want to make too much of it. I am also aware that the use of cardinals and ordinals in set theory makes it harder to state my point clearly - but I am not talking set theory here.
The idea that the adverbials are somehow basic attracts me because it seems to provide a way of looking at numbers which tends to undermine (or at least not encourage) mathematical Platonism.
Focusing on the cardinals encourages mathematical Platonism because, even though it was through counting actual things that cardinal number words no doubt arose, their usefulness lies in their not being tied to any one kind of thing and so being applicable to anything countable. Inevitably, the cardinal numbers came to be seen as objects themselves, existing in an abstract realm.
If, on the other hand, we see numbers as being based on, or deriving from, the iteration process, then our focus moves from static objects and a timeless Platonistic realm to the ordinary world we all inhabit, of processes or actions which may (or may not) be repeated.
Interestingly, not only the Latin word semel (= once) but also the English word 'same' derives ultimately from the Proto-Indo-European *sem.
It's odd to think of certain modern English expressions as having such an ancient lineage.
"Same again," for example. (Licensing the re-execution of a previous order, a particular drink at a bar, say, and thus requiring the barman to go through roughly the same motions twice (or thrice...).)
This way of conceptualizing number is quite as natural as counting apples or oranges, and may, as I suggested, provide a good basis for a non-Platonistic and altogether more satisfactory way of understanding mathematics.
Modern mathematical Platonism is a long way from Plato, but it shares with Plato a static view of (mathematical) reality. It is at odds not only with the dynamic character of ordinary life and experience but also with the new ways of looking at things which the digital revolution of the last century has encouraged.
(I am currently looking at how the ideas of one of the great twentieth-century logicians, Alonzo Church, may relate to this notion of number as iteration and to mathematical Platonism. More later, perhaps. It's heavy stuff and may not be worth the trouble!)
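Since the post ends with Church, here is a minimal sketch (mine, and only suggestive) of his encoding of the natural numbers, on which a number simply is an iterator: the numeral n takes a function and returns that function applied n times.

```python
# Church numerals: a number simply *is* an iteration count.
zero = lambda f: lambda x: x                  # apply f zero times

def successor(n):
    return lambda f: lambda x: f(n(f)(x))    # one more application of f

def add(m, n):
    return lambda f: lambda x: m(f)(n(f)(x)) # iterate f n times, then m more times

def to_int(n):
    return n(lambda k: k + 1)(0)              # decode by iterating 'add one' from 0

one = successor(zero)
two = successor(one)
three = successor(two)

print(to_int(three))            # 3
print(to_int(add(two, three)))  # 5
```

Nothing here is a static object over and above the act of repeating: "three" just is "thrice".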
Tuesday, August 28, 2012
Science, philosophy, ideology
Previously I have discussed* the implications of studies which indicate that a person's basic political (and religious) orientation is influenced greatly by genetic and early developmental factors. We generally only engage in political or religious debate because we have strong ideological or religious convictions, and those convictions set the general tone and direction of our contributions. Rationality comes in only later - to help us elaborate and defend that general position which feels so true to us (but strangely not to our antagonists).
These facts (as I take them to be) are rather inconvenient. It takes all the fun out of argument if one feels obliged to be skeptical towards one's own deeply felt convictions!
But on the plus side, it allows one (I believe) better to understand what is really going on in much ideological, religious and philosophical debate.
In my previous post on this site, I touched on these issues, suggesting that platonists and anti-platonists in the philosophy of mathematics may be caught up in a debate which is superficially rational but ultimately driven by non-rational factors - deep convictions similar to religious or political convictions.
If progress is to be made in any of these areas, I think there has to be an acceptance that we are less rational than we would like to think; and so we need to depend more on scientific methods (which incorporate mechanisms to counter individual biases etc.), and less on convictions (or the elaborate arguments which we have built upon them).
A boring conclusion, I know. Especially for those of us who have strong convictions and a taste for argument and debate about the big questions.
Within the (rather ill-defined) area of philosophy, history certainly seems to indicate that arguments and debates are most fruitful (albeit somewhat constrained) when the dividing line between philosophy and science is blurred or non-existent, and most pointless and futile whenever philosophy is disengaged from science.
I recognize, however, that we are inveterately ideological creatures**, and there will always be a role for those who can identify, articulate and criticize the ideological frameworks we inevitably create and seek to live by.
Could this be what will replace the bits of philosophy which are not swallowed up by the various sciences: the scientifically-informed critique of ideologies?
* For example, here and here.
** There are problems with the term 'ideology', I know. I am using it in a very broad sense to mean something like a system of beliefs involving values and prompting certain forms of action, often in concert with others who share the ideology and sometimes in opposition to those who don't. It may be that I would do better to speak of us being inveterately tribal. 'Ideology' may be just the intellectual's (way of rationalizing) tribalism.
Sunday, August 19, 2012
Timothy Gowers and the philosophy of mathematics
Following on from my previous post, here are a few more thoughts on the philosophy of mathematics and on Timothy Gowers's views.
On rereading his talk, I found the section on what it could mean for 2+2 to equal 5 in an alien world too clever by half and ultimately unconvincing. (Gowers virtually concedes this himself, so why include the material in a short talk?) It reminded me of Wittgenstein's equally unconvincing (to me) arguments about deviant forms of counting.
And then there is Gowers's politics, including his role in the campaign against the scientific publisher, Elsevier. (Wittgenstein too had strong moral and social convictions - including a conviction that ideas should not be 'owned' - but, unlike Gowers, he was, so far as I know, never an activist.) I am not aware of Gowers speaking anywhere of his general ideological or political position. I would guess that his views are left-leaning, but I don't know for sure, and I don't know what effect (if any) his general political and moral views might have had on his views on the philosophy of mathematics.
Which brings me back to the main point of the previous post: there seems to be no objective way of deciding on the truth or otherwise of the many and various options within the philosophy of mathematics. So when someone shows a strong commitment to a particular view, I wonder whether extraneous factors - such as ideology - might be playing a role.
I should also say something about neo-Meinongianism, having raised the topic in my previous post. Needless to say, it's complicated, but the general gist of it is something like this. Neo-Meinongians think that you can make statements about mathematical objects (like numbers) without being committed to believing they exist. They would claim that the statement that there are infinitely many prime numbers is literally true even though numbers may not exist as such. This view is based on the (I think) plausible idea that expressions like 'there is' or 'there are' are used in various ways and do not necessarily entail any ontological commitment.
As I understand it, (non-neo) Meinongianism is supposed to countenance gradations or different kinds of existence or being, whereas neo-Meinongians claim that certain uses of expressions such as 'there is' involve no ontological commitment.
In fact, Gowers makes a somewhat neo-Meinongian point in his talk, endorsing Rudolf Carnap's distinction between internal and external questions, only the latter (possibly) involving ontological commitments. So, on this view 'there are two senses of the phrase "there exists". One is the sense in which it is used in ordinary mathematical discourse - if I say that there are infinitely many primes I merely mean that the normal rules for proving mathematical statements license me to use appropriate quantifiers. The other is the more philosophical sense, the idea that those infinitely many primes "actually exist out there". These are the internal and external uses respectively.'
Or again, Gowers writes: 'One view, which I do not share, is that at least some ontological commitment is implicit in mathematical language.'
I'm not sure where I stand on all this. I have doubts about the worthwhileness of much of what goes on under the designation of 'philosophy of mathematics', but see some issues which seem real and important. Though I am certainly drawn to the position outlined by Gowers, I am acutely aware that this general orientation may be, like general religious or political predispositions, largely a function of inherited or early-environmental factors.
It seems to me that platonistic or anti-platonistic intuitions lie behind most philosophical work in the area. But what are these intuitions worth if they are in large part the result of arbitrary genetic and developmental factors?
The fact that little progress has been made in answering apparently real and interesting questions in the philosophy of mathematics and related areas suggests to me that the standard, traditional philosophical approaches are somehow flawed. Or perhaps the questions are ill-conceived, based on an inadequate understanding of the intellectual disciplines in question.
In fact, the advent of digital computers and new ways of conceptualizing information and information processing is changing the way we see mathematics (and much else besides). As new ways of doing and looking at mathematics and science emerge, questions that once seemed meaningful and important may no longer seem so.
Tuesday, August 14, 2012
A strange form of amusement
If you look up an encyclopedia entry on the philosophy of mathematics you will usually find yourself presented with a list of competing approaches dating back to Plato. For someone of my temperament this is unsatisfactory. Well, it's fine having competing views, but which one (if any of them) is true?
With areas like ethics or art, the fact that there is no consensus may be explained by the very real possibility that these areas are largely subjective. But mathematics? Surely there is something that mathematics is, some broad understanding at any rate that we can agree on?
In the philosophy of mathematics there is a basic division between realists (or platonists) who believe that mathematical objects exist (but not in time and space); and anti-realists who don't believe this (seeing mathematics simply as a human activity, for instance).
Most mathematicians are thought to embrace some form of realism (mathematical truths are 'out there' to be discovered); but not all do. The distinguished mathematician Timothy Gowers is an anti-realist (aligning himself with Wittgenstein in this matter).
This basic division is just the beginning, however, as there is (as in just about any area in which philosophers are involved) a proliferation of arguments and counter-arguments, resulting in an ever-growing list of divisions and subdivisions and so of positions to attack or defend.
Amongst which is the gloriously named neo-Meinongianism. (Could anything so called actually be true?)
A part of me says: 'Steer clear of all this, my dear fellow. Life is too short. And it's not about what it seems to be about. In part it's merely a perverse, self-perpetuating amusement for philosophers, in part an attempt by serious, religiously-inclined thinkers to defend a metaphysico-religious (or should that be religio-metaphysical?) view of the world.'
Unfair, no doubt. What of the serious, non-religious and/or anti-platonistic participants? Like Timothy Gowers.
In fact Gowers has himself asked the question of whether mathematics needs a philosophy, and I find his thoughts on the matter very persuasive.
I will probably not be spending a lot of time researching this area, but I would like to follow up on Gowers's views.*
And also, I intend to have a closer look at neo-Meinongianism. Why not?
* Gowers is a very political (and influential) figure within the mathematical and broader intellectual community and seems to have some fairly radical views about the ownership and distribution of ideas.
Sunday, July 29, 2012
Wittgenstein's anti-metaphysical stance
One thing I share with Ludwig Wittgenstein is a hostility towards metaphysics.
Henry Le Roy Finch argues* that the origin of metaphysics lies in the idea of identity, which he traces to Plato's conception of original or self-existing things, and to Aristotle's (and the Aristotelian tradition's) more systematically logical approach to the notion of self-identity.
That a thing is identical with itself (traditionally referred to as Aristotle's first law of thought) is often seen as the foundation for all logic.
Wittgenstein, on the other hand, thought that there was no more meaningless statement than a statement of self-identity. "To say of one thing that it is identical with itself is to say nothing at all." (Tractatus 5.5303)
Quite.
But Finch goes further and suggests that this skepticism about self-identity is linked to Wittgenstein's rejection of the popular notion of personal identity, the Cartesian thinking self. This is a central theme of Wittgenstein's (subsequently taken up by Gilbert Ryle). As Wittgenstein put it: "There is no such thing as the subject that thinks or entertains ideas." (Tractatus 5.631)
Certainly, both this claim and the previously-cited one are anti-metaphysical. And, significantly, Wittgenstein saw metaphysics and religion (or at least the sort of religion he embraced) as being in opposition to one another rather than as allies.
Finch rightly points out that "identitylessness" is at the heart of some important religious traditions, notably Buddhism, certain forms of Christianity** and Islam in its Sufi aspect.
Wittgenstein said that his goal was to "show the fly the way out of the fly-bottle", which can be interpreted as referring to the freeing of the human being from his or her false self-perception as a thinking self in its own private world. And this view of freedom is quite consistent with the religious traditions listed above.
On the other hand, as Finch points out, it is not consistent with other religious and philosophical approaches:
"The Stoic (and some would say also Judaic) idea of freedom is essentially that of Kant, which is that of the ethical self or free will, in which the self still retains its identity through its capacity to decide."
I remain uncomfortable with religious language and concepts, but I don't think someone like Wittgenstein can be understood (and I think he is worth trying to understand) if one ignores the implicit religious dimensions of his thought.
Also, having grown up (and so having invested a lot) in a religious tradition which I subsequently rejected, it's satisfying to see elements of that tradition coming into play here in a positive way.
* See his Wittgenstein, published by Element Books as part of the series Masters of Philosophy.
** I would single out the tradition known as fideism, and also the various mystical traditions. Pauline themes are important here; and it is worth noting in this connection that Wittgenstein liked the writings of the twentieth-century theologian Karl Barth.
Saturday, July 21, 2012
Thinking about Vienna
In his later years, Ludwig Wittgenstein had many insightful and salutary things to say - about language especially. He had freed himself from a rigorous but narrow view of logic and language, and thought his way back to what looks like a very sane and sensible and quite ordinary point of view which respects the fact that human language is embedded in human life in all its forms and activities, and reflects this variety. There is nothing metaphysical about language and meaning, no mystery (though many philosophers continue to operate as if there were*).
But there is another side of Wittgenstein which I find less appealing: his negative attitude towards science, his tendency to play the sage, and his religion.
He was, I think, very close to Tolstoy in his religious views, and very much a Christian. He gave away his share of the family fortune (and in so doing incurred the lifelong enmity of his brother Paul). He prayed. He read the New Testament.
I say that Wittgenstein played the sage. He did so in his writings, which often have an oracular tone, but also in life (as a teacher, etc.). He was a notorious philosophical head-clutcher.
And, as befits a sage, Wittgenstein had and has disciples. Philosophical Wittgensteinians often play down the religious dimension of his thought, but this is not the case with Henry Le Roy Finch, who, having completed a PhD at Columbia, taught philosophy for more than forty years, mainly at Sarah Lawrence College and CCNY (later CUNY).
I am currently reading a short work of Finch's in which he presents Wittgenstein and Heidegger as harbingers of an epochal change in Western civilization.
"We may not expect the change, which seems to be seeping in from many directions, to be forecast or presaged by any one particular philosopher or prophet. However, the thinker who is attuned to his or her own time as well as to deeper currents may pick up the seismic tremors well before others do and express some critical formative ideas in advance of the more general historical changes. Such a thinker, in the opinion of many, is Ludwig Wittgenstein (1889-1951)... [He] was a thinker of such originality that no one claimed to understand him fully in his lifetime and the attempt to comprehend his 'new way of looking at things' and make it available to the mainstream continues."
For many of Wittgenstein's followers - and especially, I would say, for the non-philosophers among them - his forbiddingly complex and beautifully written oeuvre represents a profound and sophisticated defense of the (or a) religious point of view.
And, because he didn't make explicit religious claims ('Whereof one cannot speak ...'), it is difficult to argue against his position.
I try to keep an open mind on these issues, but I do tend to the view that Wittgenstein's religious propensities are inextricably bound up with some very peculiar psychological imperatives and with his family and cultural background. His culture and most of his preoccupations are alien to us today. He was a member of one of the richest and most highly cultured Viennese families and grew up in the declining years of the once-glorious Austro-Hungarian Empire. The Romantic cult of death was in the air.
I have my doubts also about his followers. Finch sounds at times like a bit of an oddball. In an endnote on the discarding of "age-old machinelike aspects of the human mind", he mentions favorably the religious thinker Eric Gutkind (whom I have not read), and recalls attending in New York in October 1949 a talk by the architect Frank Lloyd Wright.
"He came out on the stage and began his lecture with these words, which are imprinted on my memory: 'The greatest man of our time has died today, and probably none of you has ever heard of him.' It was Gurdjieff ..."
If old Vienna was a weird and alien world, the bohemian milieu of mid-twentieth century New York might have given it a run for its money.
* Saul Kripke's work was very influential in re-mystifying the philosophy of language.
Sunday, July 8, 2012
How we ought to think
'One of the great pleasures of the philosopher's life,' wrote Jim Hankinson in The Bluffer's Guide to Philosophy, 'is being able to tell everyone (and not just children and dogs) what they ought to do. This is Ethics.'
On this reckoning, logic should afford even greater pleasure to its practitioners than ethics does insofar as it purports - at least on some accounts - to tell everyone how they ought to think. For example, consider this (from a textbook for undergraduates): 'Logic is sometimes said to be the science of reasoning, but that assertion is somewhat misleading. Logic is not the empirical investigation of people's reasoning processes or the products of such processes. If it can be called a science at all, it is a normative science - it tells us what we ought to do, not what we do do.'
Or, as Gottlob Frege put it: 'the laws of logic are ... the most general laws, which prescribe universally the way in which one ought to think if one is to think at all.' (The Basic Laws of Arithmetic)
Frege, in fact, was something of a proto-fascist, and the above statement could be interpreted as having an authoritarian, even totalitarian, tenor. It could also be interpreted simply as an honest statement of the constraints of thought, reflecting Frege's noble goal of defining the bedrock of human reasoning.
It's no surprise that most attempts to articulate logic's normative role run into trouble. For what authority can the logician appeal to?
Formal logical systems are often seen as part of an attempt to systematize thinking, to improve (as it were) on ordinary thinking and the ordinary language on which it depends. And it is certainly true that ordinary language often deceives us and obscures the underlying logic (or structure) of an argument. Translating an argument into a formal language can reduce ambiguity, but those who have sought through the study of formal logical systems to illuminate the laws of thought or their foundations have been disappointed. Doubts surround not only the putative authority of a logical system but the very meaning of its symbols.
Technically, the meaning of what Rudolf Carnap called the fundamental mathematico-logical symbols (now usually called logical constants) derives from the explicit rules we lay down for their use, but in fact the question of their meaning remains obscure. One thing is clear: the whole exercise is paradoxically dependent on a prior understanding of the basic logical operations. Ordinary language use is also predicated on such an understanding: anyone lacking it would not be able to use language in anything like a normal way.
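A concrete way to see what "meaning from explicit rules" comes to in the simplest case: in propositional logic, the constants are exhausted by their truth tables, and checking an argument reduces to a mechanical search over assignments. A minimal sketch (my illustration, not drawn from any of the authors discussed):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """An argument is valid iff no assignment of truth values makes
    every premise true while the conclusion is false."""
    for row in product([False, True], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False
    return True

implies = lambda a, b: (not a) or b  # the rule-given meaning of 'if ... then'

# Modus ponens (p, p -> q, therefore q): valid.
print(valid([lambda p, q: p, lambda p, q: implies(p, q)],
            lambda p, q: q, 2))   # True

# Affirming the consequent (q, p -> q, therefore p): invalid.
print(valid([lambda p, q: q, lambda p, q: implies(p, q)],
            lambda p, q: p, 2))   # False
```

Note the circularity the paragraph above points to: the checker itself leans on 'and', 'not' and 'all' - that is, on a prior grasp of the very operations being defined.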
The work of Frege and his successors led, of course, to the development of digital computers in the mid-twentieth century, and in this sense it was spectacularly fruitful and successful. But it has not really led to a new understanding of human reasoning or established clear guidelines - as Frege hoped - for how we ought to think.
In fact, the attempt to create formal systems which can do what natural language can do has led to a renewed appreciation of the complexity, power, elegance and logical depth of the latter. Wittgenstein was right to warn against thinking of our everyday language as only approximating to something better, to some ideal language or calculus.
We need formal systems for dealing with mathematics and science and technology, but, as far as the fundamentals of logic are concerned, it's all there - implicitly at least - in the language of a five-year-old child.
Sunday, July 1, 2012
The wider significance of Gödel's Incompleteness Theorem
Torkel Franzén [1950-2006] devoted a lot of time and energy to playing down the wider significance of Gödel's Incompleteness Theorem.* And there is no doubt that a great many extravagant claims about its significance have been made, usually along the lines that Gödel has demonstrated some fatal limitation in what scientific research can achieve, and an equal and opposite conclusion about human spirituality and artistic creativity.
I don't want to suggest that all claims for the general (as distinct from the mathematical and logical) significance of Gödel's work are mistaken: there is genuine disagreement between very knowledgeable people on the matter. But Franzén's position is widely respected as being scrupulously rigorous and based on a thorough understanding of the logical and mathematical concepts involved.
I personally found his views refreshingly straightforward when I came across them a couple of years ago. I guess I had read one too many of those contentious claims and wanted to develop a better understanding before looking again at more general questions.
Let me say, however, that I think there is abiding interest in Gödel's results, for example, in the contrast between formal systems like first-order logic (which he proved to be complete**) and stronger systems like the one outlined in Russell and Whitehead's Principia Mathematica (which he proved to be incomplete). Franzén points out that 'the incompleteness of any sufficiently strong consistent axiomatic theory ... concerns only what may be called the arithmetical component of the theory. A formal system has such a component if it is possible to interpret some of its statements as statements about the natural numbers, in such a way that the system proves some of the basic principles of arithmetic.'
We know that the natural numbers have surprising (not to say mysterious) properties. And I am tempted to say that the gulf between the first-order predicate calculus and stronger systems - a gulf which Gödel's completeness and incompleteness theorems jointly establish - underscores an informal distinction between the pedestrian logic of common sense and everyday life (which holds no surprises: life's surprises arise from complex concatenations of events rather than from our naive analyses of them) and mathematical thinking.
Be that as it may, Gödel's work has another, perhaps more solid, claim to significance which flows from the discovery that a class of functions (recursive) which Gödel defined in the course of elaborating his famous proof turns out to be equivalent to some apparently quite different concepts developed in subsequent years (in particular by Alonzo Church, Alan Turing and Emil Post). David Berlinski writes:
'The idea of an algorithm had been resident in the consciousness of the world's mathematicians at least since the seventeenth century; and now, in the third [sic***] decade of the twentieth century, an idea lacking precise explication was endowed with four different definitions, rather as if an attractive but odd woman were to receive four different proposals of marriage where previously she had received none. The four quite different definitions ... were provided by Gödel, Church, Turing and Post. Gödel had written of a certain class of functions; Church of a calculus of conversion; and Turing and Post had both imagined machines capable of manipulating symbols drawn from a finite alphabet. What gives this story its dramatic unity is the fact that by the end of the decade it had become clear to the small coterie of competent logicians that the definitions were, in fact, equivalent in the sense that they defined one concept by means of four verbal flourishes. Gödel's recursive functions were precisely those functions that could be realized by lambda-conversion; and the operations performed by those functions were precisely those that could be executed by a Turing machine or a Post machine. These equivalencies, logicians were able first to imagine and then to demonstrate.
... A concept indifferent to the details of its formulation, Gödel asserted, is absolute. And in commenting on the concept to an audience of logicians, he remarked that the fact that only one concept had emerged from four definitions was something of an epistemological "miracle".' ****
I don't know about a miracle, but the equivalence of the various definitions is certainly suggestive that the underlying concept has a certain robustness and depth.
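The flavour of those equivalence results can be suggested (in a loose, toy way of my own devising) by defining one function in two superficially unrelated formalisms - Gödel-style primitive recursion and a crude two-register machine - and observing that they determine one and the same function:

```python
def rec(base, step):
    """Primitive recursion: f(0, y) = base(y); f(n+1, y) = step(n, f(n, y), y)."""
    def f(n, y):
        acc = base(y)
        for i in range(n):
            acc = step(i, acc, y)
        return acc
    return f

# Addition, defined recursively: add(0, y) = y; add(n+1, y) = add(n, y) + 1.
add_rec = rec(lambda y: y, lambda i, acc, y: acc + 1)

def add_machine(n, y):
    """Addition as a two-register machine: while register a is non-zero,
    decrement a and increment b; halt with the answer in b."""
    a, b = n, y
    while a > 0:
        a, b = a - 1, b + 1
    return b

assert all(add_rec(n, y) == add_machine(n, y)
           for n in range(25) for y in range(25))
print("The two definitions agree on every tested input.")
```

The real theorems of course concern the whole class of computable functions, not a single example; but the moral is the same: quite different verbal flourishes, one concept.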
* This article (pdf), written just before his untimely death, gives a concise statement of his point of view.
** Two senses of 'complete' are in play here. A theory is complete in the sense at issue in the incompleteness theorem if, for any statement in its language, either that statement or its negation is provable from the axioms. Gödel's completeness theorem for first-order logic involves a different sense: every formula that is true under all interpretations is provable.
*** This is a strange error for a mathematician to make - we're talking of the 1930s!
**** The Advent of the Algorithm (Harcourt 2001), pp. 205-6.
I don't want to suggest that all claims for the general (as distinct from the mathematical and logical) significance of Gödel's work are mistaken: there is genuine disagreement between very knowledgeable people on the matter. But Franzén's position is widely respected as being scrupulously rigorous and based on a thorough understanding of the logical and mathematical concepts involved.
I personally found his views refreshingly straightforward when I came across them a couple of years ago. I guess I had read one too many of those contentious claims and wanted to develop a better understanding before looking again at more general questions.
Let me say, however, that I think there is abiding interest in Gödel's results, for example, in the contrast between formal systems like first-order logic (which he proved to be complete**) and stronger systems like the one outlined in Russell and Whitehead's Principia Mathematica (which he proved to be incomplete). Franzén points out that 'the incompleteness of any sufficiently strong consistent axiomatic theory ... concerns only what may be called the arithmetical component of the theory. A formal system has such a component if it is possible to interpret some of its statements as statements about the natural numbers, in such a way that the system proves some of the basic principles of arithmetic.'
We know that the natural numbers have surprising (not to say mysterious) properties, and I am tempted to say that the gulf which Gödel's completeness and incompleteness theorems establish between the first-order predicate calculus and stronger systems underscores an informal distinction between the pedestrian logic of common sense and everyday life (which holds no surprises, life's surprises arising from complex concatenations of events rather than from our naive analyses thereof) and mathematical thinking.
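To make the contrast explicit, the two results can be put in roughly their standard textbook form (my paraphrase, not Franzén's). The completeness theorem (1930) says that, in first-order logic, provability and logical validity coincide:

\[ \vdash \varphi \;\Longleftrightarrow\; \models \varphi. \]

The first incompleteness theorem (1931) says that if \( T \) is a consistent, effectively axiomatized theory proving basic arithmetic, then there is a sentence \( G_T \) such that

\[ T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T. \]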
Be that as it may, Gödel's work has another, perhaps more solid, claim to significance which flows from the discovery that a class of functions (recursive) which Gödel defined in the course of elaborating his famous proof turns out to be equivalent to some apparently quite different concepts developed in subsequent years (in particular by Alonzo Church, Alan Turing and Emil Post). David Berlinski writes:
'The idea of an algorithm had been resident in the consciousness of the world's mathematicians at least since the seventeenth century; and now, in the third [sic***] decade of the twentieth century, an idea lacking precise explication was endowed with four different definitions, rather as if an attractive but odd woman were to receive four different proposals of marriage where previously she had received none. The four quite different definitions ... were provided by Gödel, Church, Turing and Post. Gödel had written of a certain class of functions; Church of a calculus of conversion; and Turing and Post had both imagined machines capable of manipulating symbols drawn from a finite alphabet. What gives this story its dramatic unity is the fact that by the end of the decade it had become clear to the small coterie of competent logicians that the definitions were, in fact, equivalent in the sense that they defined one concept by means of four verbal flourishes. Gödel's recursive functions were precisely those functions that could be realized by lambda-conversion; and the operations performed by those functions were precisely those that could be executed by a Turing machine or a Post machine. These equivalencies, logicians were able first to imagine and then to demonstrate.
... A concept indifferent to the details of its formulation, Gödel asserted, is absolute. And in commenting on the concept to an audience of logicians, he remarked that the fact that only one concept had emerged from four definitions was something of an epistemological "miracle".' ****
I don't know about a miracle, but the equivalence of the various definitions certainly suggests that the underlying concept has a certain robustness and depth.
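The flavour of these equivalences can be conveyed with a toy illustration of my own (not Berlinski's): the same function, factorial, computed first by a recursive definition in Gödel's style and then by manipulating Church-encoded numerals in the spirit of the lambda calculus. Python is used purely for convenience.

# A sketch only: the same function defined two ways, echoing the
# equivalence of Goedel's recursive functions and Church's calculus.

# 1. By recursion equations in Goedel's style:
#    f(0) = 1, f(n+1) = (n+1) * f(n).
def fact_recursive(n):
    return 1 if n == 0 else n * fact_recursive(n - 1)

# 2. By lambda-conversion: numbers are Church numerals (higher-order
#    functions), and multiplication composes numerals.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
MULT = lambda m: lambda n: lambda f: m(n(f))

def to_church(k):
    c = ZERO
    for _ in range(k):
        c = SUCC(c)
    return c

def from_church(c):
    return c(lambda x: x + 1)(0)

def fact_church(n):
    # The recursion here is driven by Python for brevity (a fixed-point
    # combinator would do it within the calculus itself), but the
    # arithmetic is pure manipulation of lambda terms.
    return to_church(1) if n == 0 else MULT(to_church(n))(fact_church(n - 1))

# The two definitions agree, as the equivalence theorems guarantee.
assert all(fact_recursive(k) == from_church(fact_church(k)) for k in range(7))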
* This article (pdf), written just before his untimely death, gives a concise statement of his point of view.
** Two senses of 'complete' need to be distinguished here. Gödel's completeness theorem says that first-order logic proves every logically valid statement of its language. A set of axioms (a theory) is complete in the sense relevant to the incompleteness theorem if, for any statement in the axioms' language, either that statement or its negation is provable from the axioms.
*** This is a strange error for a mathematician to make - we're talking of the 1930s!
**** The Advent of the Algorithm (Harcourt 2001), pp. 205-6.
Tuesday, June 19, 2012
Another case of magical thinking: Albert Einstein
Having recently noted that Kurt Gödel's general outlook (especially his conviction that everything happened for a reason) fits Matthew Hutson's notion of magical thinking, I want to suggest that Gödel's friend, Albert Einstein, may in fact have had very similar ideas.
The general view is that Einstein was a model of scientific objectivity, perhaps a little stubborn in following his scientific intuitions and perhaps a little naive politically, but in no way prone to superstition or to conventional modes of religious thinking. He talked about 'God' (or 'the old one') but made it clear that his God was nothing like the personal God of the Bible, but rather was an impersonal entity, like Spinoza's deus sive natura.
In his final years Einstein was very close to Gödel and the two spent many hours walking together and talking. Einstein said at one stage that he only went to his office at the Institute for Advanced Study so he could have the pleasure of walking home with Gödel. They were clearly on the same wavelength.
The best assessment I know of what Einstein believed is an essay by Gerald Holton. Holton, a physicist with an intimate knowledge of Einstein's writings (including his correspondence), argues that Einstein's beliefs in later life were deeply influenced by early religious experiences as well as by Spinoza's Ethics.
Einstein's commitment to determinism (and rejection of the indeterminism of quantum mechanics) is well known, but it is not generally thought that this conviction had a religious source. But Holton thinks it did, and his view is very plausible.
In fact, Einstein's determinism could be seen as having much in common with Gödel's idea that everything happens for a reason. And though Einstein didn't apply the principle which Matthew Hutson sees as a keynote of magical thinking to mundane events (as Gödel did), he believed in it no less passionately than his friend.
Admittedly, determinism has often been associated with a non-religious perspective, but one can see how even a scientifically-informed determinism might also be the expression of a broadly religious point of view. From the time of Augustine, a particular form of determinism was a powerful strand in Christian thinking, for example.
It is difficult to come to clear conclusions and I have some sympathy with the point of view of Karl Popper in this matter. Popper was actually very respectful of religion and was a Cartesian dualist (putting him clearly in Matthew Hutson's 'magical thinking' camp), but even he was put off by Einstein's theological modes of thought and expression. Holton writes:
Karl Popper remarked that in conversations with Einstein, "I learned nothing . . . . he tended to express things in theological terms, and this was often the only way to argue with him. I found it finally quite uninteresting."
Saturday, June 16, 2012
Gödel's magical mind
Kurt Gödel is one of the key figures in the intellectual history of the 20th century, but, like many people who are highly gifted in domains like logic and mathematics, he struggled to cope with mundane reality. He also had paranoid tendencies in his later years and suspected that people were trying to poison him. In the end he just stopped eating.
I was reminded of Gödel when I recently came across this paragraph in an article by Matthew Hutson about our deep-rooted tendency to think in terms of magical rather than scientific logic:
Another law of magic is “everything happens for a reason” — there is no such thing as randomness or happenstance. This is so-called teleological reasoning, which assumes intentions and goals behind even evidently purposeless entities like hurricanes. As social creatures, we may be biologically tuned to seek evidence of intentionality in the world, so that we can combat or collaborate with whoever did what’s been done. When lacking a visible author, we end up crediting an invisible one — God, karma, destiny, whatever.
Interestingly, Gödel took his teleological convictions as being simply and comprehensively true. His belief that everything happened for a reason led to some very odd conclusions and was a source of some amusement and no little concern to his friends.*
Gödel was a deeply religious man who believed in a spiritual realm and life after death. Hutson's article mentions our inability to accept - or even to conceive of in a deep sense - our own mortality as another example of magical thinking. (Gödel himself, as I recall, justified his own belief in an afterlife on teleological grounds.)
The paradox of this pioneer of mathematical logic being, in ordinary life, completely under the sway of magical thinking calls for some kind of explanation or comment. Was it that he sought to impose the strict and clear logic of his professional work onto a world which works in more complex (and random) ways? Was it that he put too much faith in the ability of his mind to intuit reality, seeing the mind as a spiritual thing rather than something arising from a bodily organ carrying the marks of a long evolutionary history?
Hutson's main point is that magical thinking is natural to us and can enhance our lives. On the other hand, he sees it (quite rightly I believe) as misrepresenting objective reality and as potentially dangerous. Gödel's case illustrates some of the dangers, but clearly he had specific psychiatric problems in his later years, and it would be simplistic to attempt some kind of comprehensive explanation of his fate as being occasioned by extreme teleological thinking or whatever.
Gödel remains a great thinker and was a man with many appealing qualities, not the least of which were gentleness and reticence. His later years, after the death of his best friend, Albert Einstein, were sad and ultimately tragic.
His religious - or magical - convictions were an integral part of the man and no doubt contributed to his greatness. And, one hopes, provided some comfort in the darkness of his final years.
* I recommend Rebecca Goldstein's concise and accessible account of his life and thought, Incompleteness: The Proof and Paradox of Kurt Gödel (Norton, 2005).
Labels: Kurt Goedel, logic, Matthew Hutson, religion, superstition
Sunday, June 10, 2012
Matthew Hutson and magical thinking
Massimo Pigliucci recently wrote a critique of an article by Matthew Hutson in which Hutson previews his forthcoming book on superstition and magical thinking. Hutson responds to Pigliucci's criticisms in the comments section, and I am much impressed by what he has to say (and the charming way he says it).
In particular I am interested in his comments about our inability fully to comprehend our mortality.
Here is the paragraph in question:
[Y]ou ... take issue with my claim that "without [magical thinking], the existential angst of realizing we're just impermanent clusters of molecules with no ultimate purpose would overwhelm us." We cannot fully grasp our material, temporary nature. If you try to picture what it will be like to be dead, for example, you're still picturing something that it is like to be. Further, we are intuitively Cartesian dualists. And so we have this sense that our consciousness (or "soul") continues beyond death. Granted, no one can be sure how we would feel if we *could* fully grasp death, but there's plenty of research showing that we have strong defense mechanisms to deny our mortality--by believing we are creating transcendent meaning with our lives, for example. I see the denial of death as a form of magical thinking.
The pugnacious Pigliucci claims, by the way, that he can conceive of his future non-existence perfectly well! But I find Hutson's account both of how the brain works and of how we might reasonably deal with our ingrained irrationality to be more plausible than Pigliucci's.
In contrast to Pigliucci, Hutson sees value in 'magical thinking' on pragmatic grounds. But his pragmatism is not the semi-religious Pragmatism of (for example) William James but rather (it seems) just a recognition that our brains have certain quirks which, though irrational, can help us get through life more successfully; simply recognizing this reality and going with the flow to some extent is not such a bad thing.
He seems to be quite as non-religious as Pigliucci, but has a more nuanced response to the irrational elements in our nature.
Hutson's general approach may point to a satisfactory way of answering some of the questions I have been addressing lately on this site.
I have been wanting to come to some sort of conclusion about whether there is any value in (the more sophisticated) religious points of view, and about the implications of limited knowledge. My default position is to reject all religious claims but, given the limitations of our scientific knowledge, it seems sensible to acknowledge that mysteries abound.
But can we, I wonder, make any progress at all in coming to terms with this realm of mystery? Are the sorts of approaches that, say, someone like Martin Heidegger made to questions of existence and being (taking inspiration from the pre-Socratics) of any value at all? Or is this sort of thing just self-indulgent, pseudo-religious rambling?
My provisional answer is that Heidegger was struggling with real and important issues, like facing mortality, but he got carried away with his own rhetoric and a belief in the power of his own intuition (his fanciful etymologies are a good example of this).
There are, of course, many styles of 'doing philosophy', but I think it safe to say that most philosophers place too much credence in the power of our unaided minds to see the truth of things.
I'm not sure that we need the likes of Heidegger or Sandel (to whom Pigliucci appeals) or Pigliucci himself. Too often, in my view, philosophers are driven by a hidden religious, semi-religious or political agenda.
In fairness, though, if that (say) religious agenda reflected important aspects of reality, then any philosophizing based upon it would have to be taken seriously.
But, in the absence (as I see it) of any good reason to accept any particular religious or moral-metaphysical doctrine or point of view, one must find knowledge and wisdom where one can.
And, fortunately, there is little doubt that the perspectives put forward by scientifically-grounded writers like Matthew Hutson can be very valuable in helping us resolve problems once deemed exclusively philosophical or religious.
Monday, June 4, 2012
What Berlinski believes
In my previous post I speculated on David Berlinski's fundamental beliefs, suggesting that his particular version of agnosticism incorporates elements of a religious view of the world. I know this sort of speculation is usually futile and inconclusive, but I have been somewhat beguiled by his authorial persona and feel the need to come to a personal conclusion about his - well, his seriousness. So here are a few more thoughts.
The problem is that this very clever, cultured, worldly and sophisticated writer (who has a philosophy PhD from Princeton and a strong background in mathematics and logic) has become a darling of the 'intelligent design' movement. There have been unsubstantiated accusations that he has written anti-Darwinian tracts and courted Christian conservatives for financial gain, and that he doesn't really believe much of what he writes in this area.
Who knows? It's a hard world and people have to earn a living, and it's a fact of contemporary life that high-minded scholarly values have lost their foothold, and a commitment to truth for its own sake is widely viewed (quite rightly perhaps) as being based on self-delusion.
Actually, though, Berlinski studiously avoids endorsing religious or 'intelligent design' explanations and restricts himself to criticizing standard scientific explanations.
Is he just being a professional contrarian or is he sincere? Not always very sincere, I would suggest.
It is clear, however, that when Berlinski writes on mathematics and logic he is writing from the heart. And it's also clear that he is a mathematical Platonist. Platonism is, of course, a very respectable (and quite common) position in the philosophy of mathematics.
Indications of a dualism of mind and matter are evident in Berlinski's writings, which remind me of the views of Karl Popper (who openly espoused Cartesian dualism).
Popper also suggested at one time, like Berlinski, that the notion of natural selection was vacuous. But, to his credit, Popper changed his mind on this.
I am certainly uncomfortable with Berlinski's links with the 'intelligent design' movement. I don't think he does himself any credit by associating with, allowing himself to be used by, and directly and indirectly profiting from those whose religious understanding is rather less sophisticated than his own.
I am not accusing him of intellectual dishonesty. My best guess, however, is that he is guilty of - how shall I put it? - a certain intellectual recklessness and love of debate for its own sake. Or perhaps he is driven in these matters merely by the pleasure of baiting certain notable atheists.
The obituary he wrote for Christopher Hitchens is a gem. The picture of the two of them - Hitchens gravely ill - having a cigarette and a philosophical chat on "a forlorn hotel loading ramp" in Birmingham, Alabama will stay with me.
Does Berlinski have a hidden religious agenda? He is a mysterian like Ludwig Wittgenstein (on whom he wrote his PhD thesis), like Popper, and like Roger Penrose (to whom he refers and whose general attitude to human consciousness he shares).
Martin Gardner also comes to mind in this context. Though he lacked the academic credentials of the others I have mentioned, he, like Berlinski, was a popularizer of mathematics and a professional skeptic who wrote for a living. He was also an arch-mysterian.
Some people want answers to the big questions, and feel unsatisfied and incomplete if plausible answers are not forthcoming. Others, and Berlinski is among them, don't really want to know at all. They are exhilarated by mystery. Berlinski once observed:
"I mean, deep down we all have a sense that the world is a more mysterious or stranger place not only than we imagine it, but than we can possibly imagine."
If I'm not mistaken, Berlinski's ultimate mystery is equivalent to what medieval thinkers called deus absconditus, the hidden God.
Labels: belief, David Berlinski, religion, science, skepticism
Friday, June 1, 2012
Richard Dawkins on David Berlinski
I may have more to say on the strange case of David Berlinski and his religious beliefs (or lack thereof) in the future, not just because he is a fascinating character but also because such cases (highly intelligent people who seem to espouse a heterodox religious outlook) can sometimes challenge those of us who see ourselves as physicalists in a way more mundane believers cannot.
By way of background, this piece includes some interesting quotes from Berlinski and from Richard Dawkins.
Dawkins had asserted that anyone who claims not to believe in evolution is ignorant, stupid, insane or wicked.
'Are there, then, any examples of anti-evolution poseurs who are not ignorant, stupid or insane, and who might be genuine candidates for the wicked category? I once shared a platform with someone called David Berlinski, who is certainly not ignorant, stupid or insane. He denies that he is a creationist, but claims strong scientific arguments against evolution (which disappointingly turn out to be the same old creationist arguments).'
Dawkins then proceeds to tell of a curious and amusing incident which made him 'wonder about Berlinski's motives.'
Dawkins, Berlinski, John Maynard Smith (the highly respected evolutionary biologist) and others were guest speakers at a debate. Maynard Smith spoke after Berlinski and made fun of his arguments. As the audience laughed, Berlinski stood up and raised a hand and reproached the audience, saying something like (Dawkins couldn't remember the exact words): "No, no! Don't laugh. Let Maynard Smith have his say! It's only fair!"
I love Berlinski's writings on the history of mathematics and logic; I have not yet had a close look at his comments or claims about evolution. Of course, Dawkins's talk of wickedness is silly, but I must admit that I share some of his uneasiness about the man. What does Berlinski believe?
He calls himself an agnostic, but his antipathy to evolutionary theory, as well as his very high regard for the Jewish scriptures and the strange presence (as imaginary characters) of cardinals and Jesuits in his historical works, suggests to me that he is a believer of sorts, though a self-consciously enigmatic one.
Labels: agnosticism, belief, David Berlinski, evolution, religion, Richard Dawkins
Wednesday, May 30, 2012
Genetic factors and religious orientation
Research seems to indicate that a person's basic political orientation is largely determined by genetic and very early social-environmental influences rather than rational reflection.* Similar principles apply to religious orientation, and there is a lot of research (including studies of identical twins) which indicates that, in conjunction with environmental factors, genetics plays a powerful role in determining a person's basic religious attitudes.**
The point I want to make here relates to religious orientation - not to the research results themselves but to the implications of the results.
Let us assume (not unreasonably, given the large body of research findings) that genetic and early developmental factors do play a decisive role in setting one's basic religious - or anti-religious - orientation. Surely doubting one's intuitions in the area of religion would be the only rational response.
My intuitions, as it happens, are anti-religious in the sense that I am naturally attracted to 'no nonsense' explanations, to principles like Occam's razor; and I am impatient of (what I see as) mystification on the part of those who seek to elaborate a religious view of the world.
In the realm of religion - as in the realm of politics - polemical arguments are the norm. But - as in politics - virtually nobody is convinced by their opponents. (Richard Dawkins' early books on the science of evolution were far better and, arguably, far more influential than his later polemics against religion.)
If, however, both sides accepted that their (pro- or anti-religious) intuitions had been, as it were, arbitrarily assigned, we would move into a very different space.
An uncomfortable space, actually. Certainly, I find it uncomfortable. It's much easier - and much more satisfying - (especially if one has strong feelings in the matter) to take sides.
Let me make it clear that I see no reason to accept the doctrines of institutional religions such as Christianity, Islam or, say, Tibetan Buddhism. But I concede that the current scientific view of things is provisional and may have major gaps and deficiencies, and some of the insights of religious thinkers may in time be vindicated.
* I have discussed this elsewhere.
** The quote from Steven Pinker incorporated into this post makes a serious point. And here (PDF file) is something a bit more substantial.
Labels: genetic factors, religion, religious orientation, skepticism
Friday, May 25, 2012
Apparitions
In David Berlinski's world, the great mathematicians and logicians of times past are alive and well, and make regular appearances in the here and now. Leibniz comes late one night to sit by Berlinski's desk and discuss his curious notion of an encyclopedia of human concepts, "his lush old-fashioned wig proving irresistible to my cats, who have come creeping from their tower to bat at it."*
And Gottlob Frege [1848-1925] somewhat surprisingly team-taught logic with Berlinski at an unnamed California college at an unspecified point of the later twentieth century:
Our classes were always well attended because logic was a prerequisite for an engineering degree, and they were, I must say, well received, Frege and I both receiving excellent if somewhat innocent standardized student evaluations, any number of students somehow saying the same thing, that while Mr. Berlinski should learn to match his ties and suits, Mr. Frege is very nice. No wonder they never complained about his clothes. Frege would dress severely, no matter the sunshine, which even in February seemed to light up every corner of the campus, wearing the same black frock coat and batwing collar that, no doubt, he had worn in Germany. You must imagine the man at the blackboard, the thick German chalk in his fingers, his back always toward our students and the logical symbols going up and down the board, the steps separated, when necessary, by heavily drawn lines.
Is Berlinski romanticizing the intellectual figures of the past in his literary fantasies? No doubt he is. But then perhaps Frege and other pioneers of logic were indeed special, and alive to the wider implications of their work in a way most contemporary logicians are not. This may be because they were wrestling with big ideas rather than merely technical elaborations of those ideas. Peter Smith has recently made the point that today's more 'advanced' logic is less philosophically interesting than the more basic stuff (in effect the work of the great pioneers).
Berlinski was taught logic by the legendary Alonzo Church, and so has in my mind a certain reflected glory, a certain aura, as he is a link to a great age of human thought. Like an old colleague and friend of mine who was taught by Willard Van Orman Quine, a fact which - quite irrationally - made me very forgiving of his many personal failings.**
One last quote...
Sometime in the fall after the spring in which Frege and I had taught logic together in California, my great friend, the logician DG took his life. He had loved someone a great deal and for a very long time, and when it was over he had only logic left and logic was not enough. He was cremated in Colma at the insistence of his wife; I watched as the conveyor belt took his coffin toward the winking red lights; there was a roar from far away as the gas-fired jets ignited, and two hours later, I was given a plain wooden box with his ashes.
I took the box with me to one of those sparse California hills, which are covered with chaparral and a few scrub oaks standing in copses.
I was about to scatter the ashes when I noticed that Frege had joined me. He was dressed, as always, in black. I opened the box and let the salt-smelling wind carry the ashes far away.
Frege looked into the middle distance. I thought he would remain silent.
"I always come for my own," he said, just before vanishing himself, leaving me alone with the smell of wild sage.
* All quotations are from Berlinski's The Advent of the Algorithm (Harcourt 2001).
** In truth, he is a good and intelligent man. (Hello John!)
Saturday, May 19, 2012
Possible worlds and possible worlds
I have trouble seeing philosophy as an intellectual discipline. The gist of my thinking is that 'philosophy' is a word which has changed its meaning quite dramatically over the centuries as various sciences have split off from it, and I'm not sure that it has much meaning left.
If one has a theological view of the world, philosophy's position will not be threatened as it can resume (or continue) its traditional role as a secular complement to theological discourse. But if one denies that there are truths we can intuit or know by non-empirical, non-deductive means, then, arguably, there is no place for a non-scientific intellectual discipline (unless it be seen as an art form or as a kind of game).
Of course, the study of formal deductive systems, logical or mathematical, is non-empirical, but it is continuous with science.
All that is left of philosophy, for someone who rejects claims to substantive intuitive knowledge of a religious or moral kind, is reflection on the various intellectual disciplines (physical, social and historical sciences, mathematics, logic, etc.). Such meta-thinking is best carried out (I would presume) by the practitioners of the various intellectual disciplines rather than by outsiders (whether or not they are designated as 'philosophers').
I do recognize, however, that much pure and applied work in certain disciplines (logic, mathematics, psychology and linguistics come to mind) draws strongly on philosophical traditions of thought, and raises issues which previously have been addressed by philosophers. An example of such work is the attempt (drawing on theoretical work in logic and mathematics as well as linguistics) to model the processes of natural language.
Computational linguistics clearly has great practical and commercial importance at the moment, but it can also be seen as a project the relative success [or failure!] of which has implications for the way we see human language and ourselves.
My reading of the current state of play is that the formal approaches which followed in the wake of Chomsky's early attempts to give an explicit analysis of the syntax of ordinary language have not delivered as expected, just as early work in the field of artificial intelligence produced very disappointing results. Both of these research projects underestimated the importance of contextual factors and of the real-world knowledge which is inevitably a part of intelligent human functioning and communication. Formal systems need in some way to be integrated into this real-world context, but, even if they are, it is still possible that many important aspects of language and communication will remain out of reach. I am thinking in particular of aspects of language use which depend on social awareness (the sorts of things that people with autism spectrum disorders have trouble dealing with), including subtleties of tone and style.
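To give a concrete (and entirely toy) illustration of the contextual point, consider the sort of explicit phrase-structure grammar the early formal approaches employed. The little Python sketch below is my own construction, nobody's actual research system: it licenses two parses of 'time flies like an arrow', and nothing inside the formalism can say which reading a speaker intends. That disambiguation is exactly where contextual and real-world knowledge come in.

# A toy context-free grammar, deliberately ambiguous.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["N"], ["N", "N"], ["Det", "N"]],
    "VP":  [["V", "PP"], ["V", "NP"]],
    "PP":  [["P", "NP"]],
    "N":   [["time"], ["flies"], ["arrow"]],
    "V":   [["flies"], ["like"]],
    "P":   [["like"]],
    "Det": [["an"]],
}

def parses(symbol, words):
    # Yield every parse tree by which `symbol` derives exactly `words`
    # (naive top-down search; fine for toy inputs).
    for rhs in GRAMMAR.get(symbol, []):
        if len(rhs) == 1 and rhs[0] not in GRAMMAR:  # lexical rule
            if list(words) == rhs:
                yield (symbol, rhs[0])
        else:
            yield from expand(symbol, rhs, words)

def expand(symbol, rhs, words):
    # Split `words` among the symbols of `rhs` in every possible way.
    if not rhs:
        if not words:
            yield (symbol,)
        return
    head, rest = rhs[0], rhs[1:]
    for i in range(len(words) + 1):
        for left in parses(head, words[:i]):
            for partial in expand(symbol, rest, words[i:]):
                yield (symbol, left) + partial[1:]

sentence = "time flies like an arrow".split()
for tree in parses("S", sentence):
    print(tree)  # two structurally distinct parses, no way to choose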
There is a huge body of theoretical work in the syntax and semantics of natural language which shows, if nothing else, that there are countless ways of conceptualizing and formalizing (at least aspects of) natural language. In the light of this profusion, the key question, it seems to me, is not which theoretical approaches are true (whatever that might mean) but which are useful.*
We may want to postulate possible worlds and use set theory to model the semantics of natural language, including complex noun phrases and verb tenses and auxiliaries. But sets and 'possible worlds' are only one way (albeit a possibly enlightening one) of representing the way, for example, words like 'must' or 'could' or 'should' work. No claim need be made that such possible worlds exist. They are merely useful fictions.
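Here is the sort of thing I mean, reduced to a deliberately crude sketch of my own (not any published semantics): 'worlds' are nothing but entries in a table, and the modals simply quantify over whichever worlds one treats as accessible.

# 'Possible worlds' as useful fictions: plain dictionary entries.
worlds = {
    "w1": {"raining": True,  "cold": True},
    "w2": {"raining": True,  "cold": False},
    "w3": {"raining": False, "cold": False},
}

def must(prop, accessible):
    # 'must p': p holds in every accessible world (universal quantification)
    return all(worlds[w][prop] for w in accessible)

def could(prop, accessible):
    # 'could p': p holds in at least one accessible world (existential)
    return any(worlds[w][prop] for w in accessible)

accessible = {"w1", "w2"}  # the worlds compatible with what is known

print(must("raining", accessible))  # True: 'it must be raining'
print(must("cold", accessible))     # False: 'it needn't be cold'
print(could("cold", accessible))    # True: 'it could be cold'

Delete the dictionary and the 'worlds' are gone; no ontological commitment was ever involved.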
Physicists, of course, also talk about other possible worlds, parallel universes and so on, but they are making ontological claims. Their concern is primarily with how the world (or the multiverse) is rather than with formal systems, though they use formal systems to model the operations of nature (as linguists may use formal systems to model the operations of natural language).
But the possible worlds of logicians and linguists are, notwithstanding some outlandish claims by certain logicians, merely formal constructs, to be judged entirely by their usefulness. The other worlds of the physicist may well prove in fact to be 'out there', to exist in the normal sense of the word, though they may be inaccessible to us.
I am aware that the question of what existence consists in is a traditional philosophical one, but is it a serious or potentially productive question? I think not. Most of the confusions can be resolved simply by accepting that we use words like 'exist' in various ways.
The one area which does seem to raise important issues is mathematics. Just as there are possible worlds and possible worlds (the 'worlds' of the logician and the worlds of the physicist), so there are formal systems and formal systems, and, as we move from, say, the first-order predicate calculus to formal systems which can encompass arithmetic, we cross a kind of threshold. Mathematics needs to be clearly distinguished from logic. But this is a topic for another time.
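For the record, and without opening that topic up here: the threshold is surprisingly low. Robinson arithmetic (Q), in one standard presentation, already has enough arithmetical content for incompleteness to bite:

\begin{align*}
& Sx \neq 0, \qquad Sx = Sy \rightarrow x = y, \qquad x \neq 0 \rightarrow \exists y\,(x = Sy), \\
& x + 0 = x, \qquad x + Sy = S(x + y), \qquad x \cdot 0 = 0, \qquad x \cdot Sy = (x \cdot y) + x.
\end{align*}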
* This is not to say that the exercise of trying to create formal representations of natural languages may not reveal interesting things about natural language, or provide new, more concise, more explanatory ways of understanding aspects of the grammar of those languages than traditional grammars provided. But such rarefied goals, though not pointless, are not the sorts of goals for which society is likely to provide support. Traditional grammarians were, after all, essentially pedagogues, and their grammars were pedagogical aids.