Science is not – nor can it be, in fact – immune to ideological influences. Sometimes such influences may have a positive effect, but it would be naive to believe that such factors do not have the potential to cause distortions also.
Scientists, like anyone else, need to be motivated, and often this involves seeing their own research as defending or furthering broad convictions they may hold about human nature or the world in general.
There are many cases of great scientists whose major contributions to science were largely inspired by what we now see as utterly false assumptions. Copernicus and Newton might both be seen as examples of this, their discoveries as it were transcending the flawed intellectual matrix – or worldview – within which the theories were framed.
The institutions and practices of modern science are not designed to screen out personal biases and unwarranted assumptions so much as to ensure that published conjectures and theories and experimental results are exposed to rigorous testing and assessment procedures. The system works pretty well on the whole, encouraging intellectual rigor while not excluding the human element – imagination, creativity, etc. – which is essential for innovative thinking.
Areas such as evolutionary biology and the human sciences are particularly prone to ideological influences.
I have previously hinted at such influences in the case of research into linguistic development and evolution, notably in relation to the work of Michael Tomasello and his colleagues who seem to be adamantly opposed to certain formal approaches to the study of language. I am following up on this, and will have more to say in the future. (James Hurford's views appear to chart a sensible middle course, and are looking very plausible to me at the moment.)
And I have recently come across another example of ideology apparently driving scientific judgment and interpretation.
Last week Massimo Pigliucci published a list of his 'best' research papers on biological topics. It's clear from this list (and another on his Curriculum Vitae) that Pigliucci had from the beginning of his research career a special interest in defending and promoting the notion of phenotypic plasticity – the property of the genotype to produce different phenotypes in response to different environments.
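The standard device for depicting phenotypic plasticity is the reaction norm: the trait value a given genotype produces as a function of the environment. A minimal sketch, with genotypes and numbers invented purely for illustration:

```python
# Toy reaction norms: a flat norm (no plasticity) versus a sloped norm
# (plastic). The baselines, slopes and environmental scale are all
# hypothetical values chosen only to make the contrast visible.

def phenotype(baseline: float, plasticity: float, environment: float) -> float:
    """Linear reaction norm: trait value as a function of environment."""
    return baseline + plasticity * environment

for env in (0.0, 0.5, 1.0):  # e.g. a normalized nutrient or light level
    fixed = phenotype(10.0, 0.0, env)    # genotype A: same phenotype everywhere
    plastic = phenotype(10.0, 4.0, env)  # genotype B: phenotype tracks environment
    print(f"env={env}: A={fixed}, B={plastic}")
```

Genotype A yields the same trait value in every environment, while genotype B's phenotype shifts with conditions, which is plasticity in the textbook sense.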
In just about all the cited papers – most involving experiments with plants – the power of environmental factors to alter features of the organism is emphasized. A cursory look at the abstracts certainly suggests that the researchers (the papers are collaborative efforts) are highly unsympathetic to any approaches which could be construed as tending in the general direction of what has sometimes been characterized as genetic determinism.
Which is fine. It's only to be expected that researchers will approach such issues with strong opinions, and a degree of adversarial debate and discussion can be productive. In the end, the weight of evidence usually settles disputes, and the controversies then move on to other areas.
So I am not questioning the scientific value of Pigliucci's work – the scope and nature of phenotypic plasticity is clearly a topic of considerable interest.
But it is interesting to juxtapose his research interests in biology with his published comments about human intelligence.
In another of his recent blog posts, Pigliucci claims that environmental – cultural, in fact – factors are solely responsible for differences in patterns of involvement by males and females in different research areas. Genes don't have anything to do with it, apparently.
"[T]he fact," he writes, "that there are fewer women than men in a given field is likely the result of a large number of cultural factors (no, I don’t think it has anything at all to do with “native” intelligence, Larry Summers be damned)."
A commenter makes the point that "the greater variance of male intelligence is well established", and that genetic factors are obviously involved. The greater variance of male intelligence in this context means essentially that there is a greater proportion of individuals with very high intelligence amongst men than amongst women (and also a greater proportion of individuals with very low intelligence).
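The purely statistical point here can be made concrete with a toy calculation. The means, standard deviations and threshold below are illustrative assumptions, not real test statistics: two distributions with identical means but modestly different spreads diverge sharply far out in the tails.

```python
from statistics import NormalDist

# Hypothetical 'populations' with the same mean but a 10% difference
# in standard deviation. All numbers are invented for illustration.
narrow = NormalDist(mu=100, sigma=15.0)
wide = NormalDist(mu=100, sigma=16.5)

threshold = 145  # three narrow-distribution SDs above the common mean
tail_narrow = 1 - narrow.cdf(threshold)
tail_wide = 1 - wide.cdf(threshold)

print(f"narrow tail: {tail_narrow:.5f}")
print(f"wide tail:   {tail_wide:.5f}")
print(f"ratio:       {tail_wide / tail_narrow:.1f}")
```

Despite identical averages, the higher-variance distribution supplies more than twice as many individuals above the high cutoff (and, symmetrically, below a correspondingly low one).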
It is not impossible that some purely environmental explanation for this pattern could be found, but the evidence, even if it is not conclusive at this stage, certainly points to an at least partly genetic explanation. So the fact that Pigliucci seems to have a very strong disinclination to accept that genetics is significant here clearly goes beyond the science and points to a prior ideological commitment.
The emotional tone of his references to Lawrence Summers may not strengthen but certainly doesn't weaken my case. "I can't stand the bastard," Professor Pigliucci notes in a comment.
Pigliucci's strong ideological and moral convictions – which no doubt played a part in his decision some years ago to shift his focus from science to philosophy – may themselves be explicable largely in terms of cultural factors.
But I just can't help thinking about Massimo's (hypothetical) monozygotic twin who was raised by a Swedish family. Did he too follow a scientific career? Does he have a penchant for bow ties? Is he a religious skeptic? Does he too have strong views on political and social questions? And what is his attitude to Lawrence Summers, I wonder?
Wednesday, August 28, 2013
Sunday, August 11, 2013
Life, death and computation
I have been spending a bit too much time lately reading other people's blogs and (to some extent) participating in associated discussions. The main problem with this sort of activity is that – largely because the focus of discussions is always shifting – it encourages superficial debate at the expense of deep understanding.
But, interestingly, two recent blog discussions on two very different sites which I happen to follow touch on a similar theme.
Biologist and philosopher Massimo Pigliucci recently precipitated a freewheeling discussion of the relevance of computers and computing to understanding the human mind and the universe in general. In fact, Pigliucci's post on the topic prompted more than 200 comments, many of which are well worth reading.
Professor Pigliucci has a disarming tendency to rush in where more cautious academics fear to tread – that is, beyond his areas of specific expertise. (I suspect his approach owes something to the intellectual traditions of his native Italy, where academics have traditionally played an important role in the broader cultural, moral and political sphere.)
Pigliucci argues strongly against functionalist and computational views of the mind. I don't have strong views on this question, though I share Pigliucci's skepticism about some of the (as I see it) wilder claims about mind uploading and the scope of simulations etc.
I did, however, question his contention that seeing the operations of nature in computational terms is likely to lead to mathematical Platonism, commenting as follows:
My understanding is that many of the leading proponents of an information- and information processing-based approach to physics see information as physical. The bits or qubits are always 'embodied' in actual physical processes, albeit that these processes are understood at a deep level in terms of the processing of information. (There are close parallels between information theory and thermodynamics.)
So I'm not sure that such a view leads to Platonism. Seeing physical processes as algorithmic (and scientific theories as predictive algorithms) seems to me a genuinely interesting perspective: but it may well be that there is no way actual physical processes can be perfectly simulated (or predicted).
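The parallel with thermodynamics mentioned above is a formal one: Shannon's entropy, H = −Σ p·log p, has the same mathematical shape as Gibbs's expression for thermodynamic entropy. A minimal sketch:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one full bit of information per toss; a heavily
# biased coin carries less, because its outcomes are more predictable.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))
```

The same functional form, with probabilities over microstates and a factor of Boltzmann's constant, gives the Gibbs entropy of statistical mechanics – which is the sense in which the bits are 'embodied' in physics.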
Adrian McKinty is a novelist with a strong interest in social, cultural and philosophical topics. At McKinty's nicely named site, The Psychopathology of Everyday Life (I know – Freud got there first), in the comment thread of a post about Philip Larkin featuring his confronting poem, 'Aubade', McKinty mentions Nick Bostrom's simulation argument: if we accept two fairly plausible-seeming assumptions, then our universe is almost certainly a 'simulated' universe created by an advanced civilization.
As I commented there:
I am ... (prompted by your comments, Adrian) having a look at Nick Bostrom's ideas. My initial attitude is skepticism, but that may just be what he would call my status quo bias jumping in.
I do think it makes sense (simply in terms of physics) to see natural processes in terms of information processing, but it is a big jump from there to thinking about beings who might have set the process going (and to calling it a simulation).
And what would Larkin make of all this? (Turning in his grave, I suspect.)
I am continuing to look into the simulation argument which I first encountered some years ago. More later, perhaps.
But regular readers will know that I am very skeptical of arguments and points of view which take their origins from a philosophical (as distinct from a scientific) base. Bostrom's main argument for the simulation hypothesis is in part statistical but basically philosophical – and far from convincing from my point of view.
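For what it's worth, the statistical core of the argument is simple bookkeeping. In a simplified rendering, loosely following Bostrom's formulation: if a fraction f of civilizations reach a simulation-capable stage, and each such civilization runs on average n ancestor-simulations, then the fraction of all human-type observers who are simulated is roughly f·n / (f·n + 1). Both inputs below are freely invented.

```python
def simulated_fraction(f: float, n: float) -> float:
    """Of all observers with human-type experiences, roughly what
    fraction live in simulations? (Simplified Bostrom-style count:
    one 'real' history versus f * n simulated ones.)"""
    return (f * n) / (f * n + 1)

# Even a modest f, multiplied by a large n, swamps the one real history.
print(simulated_fraction(0.1, 1_000_000))
```

The philosophical weight, of course, rests entirely on whether the two assumptions feeding f and n are granted – which is where my skepticism comes in.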
I can't help feeling that people like Bostrom (and David Pearce who influenced him) are driven by a kind of religious instinct. Certainly some of the groups with which they are associated have a cultish feel.
The other thinker mentioned by Adrian in the comment thread is Samuel Scheffler. Scheffler applies 'what if' scenarios to thinking about death. What if we knew the world was going to be destroyed soon after our death? His general point seems to be that we are, at bottom, less concerned about our own personal fate per se than about our fate seen in the light of a continuing social context.
This may well be, and such thinking is very much in accordance with the view that the sense of self derives from the linguistic, cultural and social context in which we grow up. But I think Scheffler overplays the extent to which future generations give meaning to our lives.
Also, I had a look at Scheffler's background. And it seems pretty clear that his being a socialist (he is apparently a disciple of the 'analytical Marxist' G.A. Cohen) would have shaped, to some extent at least, his approach to thinking about the future in general, and about ethics.
Labels: computation, death, life, Massimo Pigliucci, Nick Bostrom, reality, Samuel Scheffler
Thursday, July 25, 2013
Empathy and language
The practice of pointing by infants raises some interesting questions about the psychological foundations upon which human communicational and linguistic capacities are built.
As explained in an article cited in the comments section of the previous post, young children routinely point to direct the attention of a nearby adult to something the infant finds interesting and apparently wishes the adult to see and appreciate also.
When an infant doesn't start pointing by the appropriate age (about 12 months), it's often a sign that they don't have an intuitive sense of other minds – and also of linguistic problems ahead. (I originally came across discussions of this phenomenon in material on identifying the early signs of autism.)
The article referred to above draws on papers by Michael Tomasello and his colleagues which explore the phenomenon of infant pointing and associated behaviors. Tomasello and his fellow researchers argue for "a deeply social view [of the process] in which infant pointing is best understood – on many levels and in many ways – as depending on uniquely human skills and motivations for cooperation and shared intentionality (e.g., joint intentions and attention with others). Children's early linguistic skills are built on this already existing platform of prelinguistic communication."
The researchers note that the kind of pointing they discuss is unique to humans and depends on certain key insights about the existence and nature of other minds as well as emotional factors – essentially a desire to share one's perceptions and to share in the perceptions of others.
A cursory reading of sources cited in the Slate article and related material suggests to me that Tomasello and his colleagues may well be overplaying their intuitions about sharing in their claims about the origins and development of human communication and language.
Of course, emotional factors cannot be ignored, but could not these elements be explained in terms of cognitive imperatives and the practical benefits of collaboration and reliable information transfer?
György Gergely and Gergely Csibra explicitly challenge Tomasello's views on the centrality of the emotions associated with shared intentionality and focus instead on the communication mechanisms necessary to ensure efficient cultural learning.
A crucial point relates to the explanatory weight given to the highlighted emotions. Tomasello and his colleagues posit the desire to share emotional states as a key explanatory factor rather than merely as one element in a diverse suite of human abilities and behaviors.
But I am nowhere near having a sufficiently strong grasp of the material to take sides in this dispute.
The same (or similar) perceptions and feelings which apparently motivate gestural communication – however we might characterize them – certainly do seem, in normal infants, also to motivate and facilitate the child's rapid and apparently easy acquisition of whatever language or languages they are routinely exposed to.
Significantly, though, the complexities of language can be learned (albeit often with some difficulty) even by those who lack a strong intuitive sense of other minds.
It's certainly plausible that the historical development both of prelinguistic modes of communication (like pointing) and language amongst our ancestors was dependent upon (amongst other things) certain empathetic perceptions and feelings. But, of course, the cognitive and affective factors involved are in practice always inextricably linked, sometimes in very complicated ways.
In his work on autism, Simon Baron-Cohen distinguishes between the cognitive and affective aspects of empathy. Cognitive empathy is all about what we perceive and understand about the mental states of others, whereas affective empathy concerns our emotional responses to this knowledge. Strength or appropriateness of response in one area does not necessarily entail strength or appropriateness of response in the other.
For example, the autistic person typically scores poorly on tests of cognitive empathy (e.g. reading particular emotions in pictures of faces cropped to reveal little more than the eyes), but often exhibits appropriate affective responses (e.g. to perceived suffering). By contrast, the psychopath typically has no problem at all with cognitive empathy (or language, for that matter), but displays deficiencies in terms of affective response.
Speculations about the way language evolved will necessarily draw on the findings of cognitive and developmental psychology as well as other areas. But, while it is reasonable to assume that affective responses played a role in the development of language, I have some doubts about the way Tomasello and his colleagues present the basic issues and about some of their key claims.
Also, as someone with a background in formal approaches to language and syntax, I am naturally wary of approaches which downplay the significance of this side of things. I was unimpressed, for example, by the comments by one of Tomasello's co-researchers, Malinda Carpenter, quoted in the Slate article.
The fact that pointing seems to call on a sophisticated understanding of what is going on in the heads of other people, she noted, "suggests that [infants] can do so much more with pointing prelinguistically than we ever thought before."
Until recently, people thought that this sort of knowledge only emerged with language. But when Carpenter, who was drawn to this work through an initial interest in language, started looking at prelinguistic gestures, her perspective changed.
"[E]verything’s already there!" she said. "I completely lost interest in language because you can see so much complexity already in infants' gestures."
It depends on what you mean by 'everything', I suppose, but I would have thought that language adds a little something to the mix.
Monday, July 8, 2013
A science of language?
A large part of the fascination which language holds for many is that it is one of the key markers of our humanity. Language is at the heart of human culture and human consciousness. Tense and aspect mark our sense of time, grammatical mood our sense of possibility, personal and possessive pronouns our very sense of identity and how we see ourselves as relating to other people and things.
Partly because language is an inextricable and defining part of us – and at once social and individual – it is impossible to clearly define a science of language in the way most other sciences can be defined.
To what extent should the study of language be subsumed into psychology and neuroscience? Language is behaviour, and the human language faculty can only be said to be understood to the extent that the neurological processes which drive it are known.
On the other hand, language is also a cultural object which can be studied in its own right, both structurally and historically.
It's hardly surprising, then, that, since its rise to prominence in the 19th and 20th centuries, linguistics has, as sciences go, been unusually riven by competing frameworks and approaches, and these divisions have, if anything, increased over time. (Though I sometimes wonder how different things might have been if the later-20th century's most prominent linguist had not been such a relentless intellectual warrior and contrarian!)
Ultimately, the divisions between the sciences are merely for practical and administrative purposes: the quality – and worthwhileness – of research is not generally determined by discipline-specific criteria but rather by more general ones.
But I don't want to get into an abstract discussion about the unity of science or related matters. I really just wanted to make the point that language represents not so much a subject area as a number of interrelated subject areas. And, because the phenomenon of language can be approached from very different directions, it is difficult, if not impossible, to pull all these perspectives – and the knowledge implicit in them – together.
Perhaps, then, the best we can do is to focus on specific questions which may happen to relate to language in one way or another and to renounce as unrealistic the desire for a comprehensive understanding of the phenomenon of language per se.
I'll finish by mentioning a couple of language-related topics which I have been thinking about lately.
Last month I referred to the ideas of Simon Fisher and Matt Ridley on culture-driven gene evolution. The work of Fisher and others has shown that the FOXP2 gene has a crucial role to play in human linguistic abilities. The gene occurs in other species in slightly different forms and it plays various roles. Interestingly, it has been shown to play a key role in vocal expression in birds (canaries and finches) and in chimpanzees, as well as in humans. Neanderthals are now believed to have had exactly the same form of the FOXP2 gene as modern humans.
I can't help thinking that the question of the origin of language retains its fascination in part because it promises to reveal something important about who we are and where we came from.
This is, I think, largely an illusion based on the idea that the abrupt discontinuity we now see between ourselves and our nearest relatives (chimpanzees) was always there. But intermediate forms did exist (until relatively recently, in fact).
In practice, I think we tend to assume, consciously or unconsciously, that our species has an essence.
It hasn't. Nonetheless, the development of human language as we know it does mark a clear historical and cultural discontinuity.
On a more practical note, I have also been thinking about the reputed benefits of bilingualism. It has been claimed, for instance, that bilingualism can delay the onset of the symptoms of Alzheimer's disease by about five years. I have some reservations about the significance of these claims. More another time.
Partly because language is an inextricable and defining part of us – and at once social and individual – it is impossible to clearly define a science of language in the way most other sciences can be defined.
To what extent should the study of language be subsumed into psychology and neuroscience? Language is behaviour, and the human language faculty can only be said to be understood to the extent that the neurological processes which drive it are known.
On the other hand, language is also a cultural object which can be studied in its own right, both structurally and historically.
It's hardly surprising, then, that, since its rise to prominence in the 19th and 20th centuries, linguistics has, as sciences go, been unusually riven by competing frameworks and approaches, and these divisions have, if anything, increased over time. (Though I sometimes wonder how different things might have been if the later-20th century's most prominent linguist had not been such a relentless intellectual warrier and contrarian!)
Ultimately, the divisions between the sciences are merely for practical and administrative purposes: the quality – and worthwhileness – of research is not generally determined by discipline-specific but rather by more general criteria.
But I don't want to get into an abstract discussion about the unity of science or related matters. I really just wanted to make the point that language represents not so much a subject area as a number of interrelated subject areas. And, because the phenomenon of language can be approached from very different directions, it is difficult, if not impossible, to pull all these perspectives – and the knowledge implicit in them – together.
Perhaps, then, the best we can do is to focus on specific questions which may happen to relate to language in one way or another and to renounce as unrealistic the desire for a comprehensive understanding of the phenomenon of language per se.
I'll finish by mentioning a couple of language-related topics which I have been thinking about lately.
Last month I referred to the ideas of Simon Fisher and Matt Ridley on culture-driven gene evolution. The work of Fisher and others has shown that the FOXP2 gene has a crucial role to play in human linguistic abilities. The gene occurs in other species in slightly different forms and it plays various roles. Interestingly, it has been shown to play a key role in vocal expression in both birds (canaries and finches) and chimpanzees as well as in humans. Neanderthals are now believed to have had exactly the same form of the FOXP2 gene as modern humans.
I can't help thinking that the question of the origin of language retains its fascination in part because it promises to reveal something important about who we are and where we came from.
This is, I think, largely an illusion, based on the idea that the abrupt discontinuity we now see between ourselves and our nearest relatives (chimpanzees) was always there. But intermediate forms did exist (until relatively recently, in fact).
In practice, I think we tend to assume, consciously or unconsciously, that our species has an essence.
It hasn't. Nonetheless, the development of human language as we know it does mark a clear historical and cultural discontinuity.
On a more practical note, I have also been thinking about the reputed benefits of bilingualism. It has been claimed, for instance, that bilingualism can delay the onset of the symptoms of Alzheimer's disease by about five years. I have some reservations about the significance of these claims. More another time.
Monday, June 17, 2013
The adjective not the noun
I – and others – have been reflecting lately on the concept of political conservatism, and these reflections have prompted some inchoate – and totally non-partisan – meta-thoughts on the problems of political ideology which I have set out below.
One assumption behind most reflections on conservatism (or on any political ideology) is that it is desirable to have a consciously worked out (personal) political philosophy. And the assumption behind this is that it is possible somehow to assess alternatives in a rational manner and arrive at a satisfactory conclusion. This latter assumption – on which the value of the whole exercise depends – I am beginning to doubt.
When you reflect on these matters, you have to start somewhere. And where you start will be somewhat arbitrary, though it may well be in part determined by your values.
For example, you may want to maximize equality; or you may be more concerned with individual freedom; or order, or one of any number of other ideals or goals.
My starting point – reflecting perhaps the importance I place on a scientific view of the world free of metaphysical and religious baggage – would be the social nature of human identity.
Even those who think they have totally rejected the idea of a soul still cling, I believe, to a version of this idea. It is a natural belief for us to have, and I still feel it in myself.
Take this simple thought experiment. A human body could, presumably, be grown in a laboratory, nourished and exercised to develop muscles, etc. But, if it were deprived of all normal social interactions, linguistic and other cultural input, the brain would not develop normally and this body, though apparently perfectly formed and healthy, would not, as a result, constitute a person. It would not have a human identity, or human awareness. What rights would it have, if any?
This idea of a living human body with a radically undeveloped brain (due to the withholding of social inputs during development) is – to me at any rate – slightly shocking and confronting. It tells us something about ourselves: that our sense of self, our human identity comes just as much from without – from a particular social and cultural milieu – as from within. The social matrix within which we grow is an essential component of our individuality and our very humanity. We never were and can never be 'self-contained'.
This fact has implications for any social or political philosophy. I won't try to spell out the implications here, except to say that such a view is fatal for all forms of atomistic individualism.
Values, as well as often determining the starting-point for one's basic thinking about politics, also play a part in determining the direction of the argument. And this basic notion of the social self could clearly be developed in either a progressive or a conservative direction. The choice seems to depend on taste or predilection.
Which leads me to wonder whether developing such thoughts and arguments is worthwhile (other than for polemical or similar purposes).
Moral, social and political reflections and arguments move in a linear fashion like language. In fact, the thoughts only really crystallize when spoken or written down. But, clearly, this linear process does not do justice to our deepest values which are multidimensional. Arguably, such a process cannot represent our values accurately, much less enable us to assess or justify them.
We can, of course, describe, catalogue and consider the various political outlooks which others have elaborated and defined, seeing them as more or less internally consistent and competing frameworks. But, unfortunately, all these frameworks are – necessarily – highly simplified conceptual structures which are inadequate not only as models for how the (social and political) world works (or could work), but also as representations of the actual political beliefs and values of individuals and groups.
They are arguably post hoc rationalizations, and their main function, you could say, is to facilitate the formation of, and deepen solidarity within, social and political groupings. Part marketing tool, part reinforcement mechanism.
What I am saying essentially is that such frameworks are inevitably inadequate as serious belief systems.
But, though the various –isms are no good, the adjectives from which they derive do real and important work. So I think one can still usefully talk about conservative approaches to social, political and other questions, and distinguish them from, for example, liberal (or progressive) approaches.
Increasingly I see these matters in terms of individuals having – due mainly to various genetic and developmental factors – different psychological profiles and personality traits. These differences can, of course, be mapped and defined in different ways, but something like a conservative/progressive or conservative/radical contrast will, I think, continue to be a feature of models of human personality and cognition.
Wednesday, June 5, 2013
Necessary freedom
The mathematician G.H. Hardy – most famous amongst the general public for his having 'discovered' the self-taught prodigy Ramanujan – said that the only other career that might have suited him was journalism.
When I first read this it surprised me, even bearing in mind the fact that journalism in early 20th-century England was very different from journalism today.
Clearly Hardy could write – his short book, A Mathematician's Apology, is a minor classic. But it's very clear from that essay that his identity was inextricably bound up with being a mathematician, and nothing else.
Late in life he attempted suicide, not just because of the general effects of failing health but also – and perhaps mainly – because his mathematical powers had deserted him.
Rather depressingly, he claimed (in his Apology) that most people don't have any significant talent for anything. But "[i]f a man has any genuine talent he should be ready to make almost any sacrifice in order to cultivate it to the full." Anyone, he asserted, who sets out to justify his existence and his activities has only one real defense. And that is to say, "I do what I do because it is the one and only thing that I can do at all well."
Why did he mention journalism, I wonder? It's particularly puzzling because journalism is so utterly different from mathematics generally – and especially from Hardy's style of doing and thinking about mathematics with its focus on timeless beauty.
This is in addition to the fact that mathematics is normally associated with the sciences. So, naïvely, I would expect a mathematician to say that, had he not pursued mathematics as a career, he might have become a scientist or engineer of some kind, for example.
But Hardy, though he was attracted to biology in his youth, exhibited in his adult life no great interest in or high regard for science, and he had a quite negative attitude to applied science. He prided himself on the fact (as he saw it) that his work had no practical applications.
And he disliked new technologies. He had a telephone installed in his house which he ostentatiously avoided using: it was for the use of any guests who fancied that kind of thing.
By journalism Hardy certainly didn't mean writing about scientific (or mathematical) subjects for a general audience. He meant, presumably, mainstream journalism. And my guess is that he was attracted to it for three basic reasons.
Firstly, he recognized that he had a second talent, a gift for writing – and writing with style and wit and conciseness. (He was famous amongst his friends for his postcards.)
Secondly, though scornful of politicians, he did have an interest in politics and was active in a pacifist organization, the Union of Democratic Control, during World War I. Significantly, one of the leading and most impressive figures involved in this organization was the French-born journalist E.D. Morel.
And last but not least, I suspect that Hardy saw in the lifestyle associated with journalism (as in the academic lifestyle of the time) a kind of freedom which for a certain kind of person is not just desirable but necessary.
Saturday, June 1, 2013
Cultural innovation, genes, and the origin of language
Simon Fisher and Matt Ridley have argued, mainly on the basis of new DNA sequencing data, that cultural factors were far more significant in driving genetic changes in the evolutionary history of our species – such as those that led to the development of language – than was previously thought.
"The common assumption is that the emergence of behaviorally modern humans [sometime] after 200,000 years ago required – and followed – a specific biological change triggered by one or more genetic mutations."
But the "prevailing logic in the field may put the cart before the horse. The discovery of any genetic mutation that coincided with the 'human revolution' must take care to distinguish cause from effect. Supposedly momentous changes in our genome may sometimes be a consequence of cultural innovation. They may be products of culture-driven gene evolution."
Fisher and Ridley cite obvious, uncontroversial examples of culture driving genetic change – such as lactase persistence amongst dairy-farming communities, and alcohol tolerance amongst Europeans (who generally drank more alcohol than Asians, for example).
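The logic of these culture-first examples can be sketched with a toy model (purely illustrative – the function name and parameter values are my own invention, not Fisher and Ridley's): suppose an allele confers a fitness advantage only because a cultural practice, such as dairying, is already in place. Under standard haploid selection, even a modest advantage then carries the allele from rarity to near-fixation:

```python
def allele_trajectory(s=0.05, p0=0.01, generations=200):
    """Deterministic haploid selection sweep (toy model, invented parameters).

    s  -- selective advantage conferred *only* in the presence of an
          existing cultural practice (e.g. dairying for lactase persistence)
    p0 -- initial frequency of the advantageous allele
    """
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # standard haploid selection update: p' = p(1 + s) / (1 + s*p),
        # where 1 + s*p is the population's mean fitness
        p = p * (1 + s) / (1 + s * p)
        trajectory.append(p)
    return trajectory

traj = allele_trajectory()
# A 5% advantage takes a rare allele (1%) to near-fixation within
# ~200 generations – fast on an evolutionary timescale.
```

On this picture the genetic change is the consequence rather than the cause: the allele spreads only because the cultural practice came first – precisely the cart-and-horse reversal Fisher and Ridley describe.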
The question of language origins is much more complex, of course, but there is mounting evidence – relating, for example, to variations in the FOXP2 gene in humans and other species – that cultural factors were the drivers of change.
FOXP2 is known to play an important role in human language abilities, but, in considering the roles of FOXP2 in human evolution, it is important to recognize that it has a deep evolutionary history.
"Animal studies indicate ancient conserved roles of this gene in patterning and plasticity of neural circuits, including those involved in integrating incoming sensory information with outgoing motor behaviors. The gene has been linked to acquisition of motor skills in mice and to auditory-guided learning of vocal repertoires in songbirds. Contributions of FOXP2 to human spoken language must have built on such ancestral functions.
"Indeed, further data from mouse models suggest that humanization of the FOXP2 protein may have altered the properties of some of the circuits in which it is expressed, perhaps those closely tied to movement sequencing and/or vocal learning.
"Given these findings, it seems unlikely that FOXP2 triggered the appearance of spoken language in a nonspeaking ancestor. It is more plausible that altered versions of this gene were able to spread through the populations in which they arose because the species was already using a communication system requiring high fidelity and high variety. If, for instance, humanized FOXP2 confers more sophisticated control of vocal sequences, this would most benefit an animal already capable of speech. Alternatively, the spread of the relevant changes may have had nothing to do with emergence of spoken language, but may have conferred selective advantages in another domain.
"FOXP2 is not the only gene associated with the human revolution. However, it illustrates that when an evolutionary mutation is identified as crucial to the human capacity for cumulative culture, this might be a consequence rather than a cause of cultural change. The smallest, most trivial new habit adopted by a hominid species could – if advantageous – have led to selection of genomic variations that sharpened that habit, be it cultural exchange, creativity, technological virtuosity, or heightened empathy.
"This viewpoint is in line with recent understanding of the human revolution as a gradual but accelerating process, in which features of behaviorally modern human beings came together piecemeal in Africa over many tens of thousands of years."
The accumulating evidence alluded to by Fisher and Ridley certainly makes Noam Chomsky's suggestion that language appeared all of a sudden and was the direct result of a genetic mutation look naïve and implausible.
But it also challenges the more mainstream approaches still favored by many linguists who (influenced, like Chomsky, by traditional rationalism) see the human language faculty in absolute and ahistorical terms.
Descartes saw "la raison" [reason] as being "toute entière en un chacun" [entirely and equally present in each of us], and many linguists still see language in a similar – and strangely metaphysical – way.
