Tuesday, December 24, 2019
Language and thought: some metaphysically skeptical reflections
Conceptual frameworks are always provisional
The logical positivists took a very hard anti-metaphysical line. They were right, in my view, to see traditional metaphysics as being futile and pointless. The essential problem with metaphysics is epistemic. How (given a basically scientific view of the world) can purely metaphysical statements be justified? Rudolf Carnap and most of his Vienna Circle colleagues didn’t think they could and consequently saw no place for metaphysics as a serious discipline.
There is no denying that fundamental, foundational and relational questions arise naturally in the course of scientific and other forms of rigorous inquiry. These kinds of questions are not only worthwhile, they are necessary and inescapable, and to call them philosophical or (in certain cases) metaphysical is not out of line with common usage. Problems arise, however, when philosophical or metaphysical thinking becomes detached from empirical reality and begins to feed on itself.
In the 1940s and ’50s, Carnap articulated a nuanced account of ontological claims in the context of mathematical and scientific inquiry. He saw such claims as being either trivially true or false (if considered within the theoretical framework in question) or nonsensical (if not). The former were associated with “internal” questions, the latter with “external” questions. Internal questions are asked with a particular framework in mind. Do numbers exist? Within the framework of arithmetic, (trivially) yes. But do numbers really exist in some absolute sense? The question, arguably, is meaningless.
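To make the contrast concrete, here is a minimal illustration (my gloss, not Carnap's own notation). Within a framework such as first-order Peano arithmetic, the internal existence claim is a trivial theorem; the external question is not a sentence of the framework at all and, on Carnap's account, is at best a practical question about whether the framework is worth adopting.

```latex
% Internal question, asked within the framework of arithmetic:
\mathsf{PA} \vdash \exists x \, \bigl( x = S(0) \bigr)
\qquad \text{(trivially: the number one exists)}
% External question: "but do numbers really exist, absolutely?"
% This is not expressible as a sentence of the framework at all.
```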
This approach works for formal disciplines and strictly scientific theoretical concepts but the sciences are not entirely formal. They have their origin in our interactions with, and natural curiosity about, the world. It is a mistake to imagine that we are ever entirely locked into specific and rigid linguistic or theoretical frameworks. Frameworks are fluid and necessarily provisional.
Ordinary thinking is an element of our engagement with the world and is never entirely mechanical or formal. It is not formal because interpretation of one kind or another is always involved, in the sciences and elsewhere. And it is holistic in the sense that it is not composed of discrete levels or completely self-contained modules.
Not only are various parts of the brain interconnected in complex ways, the broader physical (somatic and extra-somatic), social and cultural matrix within which neural processing occurs and upon which it depends is also holistic and massively interconnected. Our thinking cannot be separated from this broader physical and cultural context. This fact has important implications for how we think about thinking.
An ability to conceptualize and deal in a practical way with a wide range of contingencies involves various forms of thinking and meta-thinking. My focus here is on aspects of thinking and meta-thinking which relate respectively to language and number.
Metalinguistic awareness
Alfred Tarski developed the notion of metalanguage, though he was concerned mainly with formal rather than natural languages. Karl Popper explicitly drew on Tarski’s concept of metalanguage to defend a form of the correspondence theory of truth. The linguist Roman Jakobson appealed to the same basic idea when, late in his career, he outlined what he saw as the functions of language. One of these was the metalingual (or metalinguistic) function. It applies when a language is used to talk about itself.
The notion of metalinguistic awareness is often employed in discussing such phenomena as code-switching and language alternation. But metalinguistic awareness also applies to phenomena which occur in strictly monolingual environments. As noted above, languages are routinely used in a reflexive way (i.e. to refer to themselves). What's more, a speaker’s awareness of implied as distinct from literal meaning and the use and understanding of various figures of speech also require a certain level of metalinguistic awareness.
Using and understanding irony requires a relatively high level of metalinguistic awareness. Sarcasm is less subtle than irony but provides a clearer illustration that what is literally being said is not always what is actually being said, the intended sense being (in the case of sarcasm) the opposite of the literal sense.
Gödel's incompleteness theorems
The general idea that a broader context always obtrudes applies not just to ordinary life and language use but also to specialized scientific and scholarly work. No significant area of study is self-contained. Not even formal disciplines, such as arithmetic.
Gödel’s work demonstrated the limitations of formal axiomatic systems. He showed that no consistent, effectively axiomatized formal system rich enough to express basic arithmetic is capable of proving all truths about the arithmetic of the natural numbers. He also demonstrated that no such system can prove its own consistency.
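Stated schematically (a standard modern formulation rather than Gödel's own wording), the theorems say that for any consistent, effectively axiomatized formal system F strong enough to encode basic arithmetic:

```latex
% First incompleteness theorem (in its Goedel--Rosser form): there is a
% sentence G_F of the language of F which F can neither prove nor refute.
F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F
% Second incompleteness theorem: F cannot prove the arithmetized statement
% Con(F) asserting its own consistency.
F \nvdash \mathrm{Con}(F)
```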
Formal systems then (at least those beyond a certain level of complexity) are not self-contained, not sufficient unto themselves. They are necessarily situated in – and in a real sense are dependent on – a broader context. And any expanded system is dependent on a yet broader context in the same way.
Gödel was a Platonist and saw his incompleteness theorems as vindicating his position on the power of the human mind. The main lesson I take from his work, however, is that productive thinking is necessarily contingent rather than self-contained; that it necessarily engages with a wider world.
What this wider world consists of is open to debate. It comprises seemingly very different kinds of things and processes: the processes studied by mathematicians; the physical processes studied by physicists and biologists; social and cultural processes; and so on.
But on what basis – other than practicality or convenience – do we draw dividing lines between these different kinds of process (and, by extension, between disciplines)?
Mundane concepts
Because of the problems of justifying metaphysical statements I prefer to remain metaphysically agnostic and to avoid making claims about the world which go beyond common sense, common usage and the findings of science and scholarship. Neither ordinary thinking nor rigorous intellectual inquiry requires an explicit metaphysical foundation. Effective thinking, speaking and research do have prerequisites, but such a foundation is not one of them.
Sure, our natural habits of thought involve implicit assumptions and commitments which are often reflected in the grammar of language. This is something to be aware and wary of, however, not something which should form a basis or foundation for serious metaphysical claims or systematizing.
In respect of the existence or non-existence of entities postulated by scientific theories, Carnap’s approach works well because the theories in question are identifiable and distinguishable one from another. If you move beyond particular theories, however, and focus on mundane concepts which we can approach from many directions and in many ways, there is no single language or system or theory to which we can appeal (and so no clear way of distinguishing internal from external questions). Such mundane concepts include concrete things that we might touch or eat or bump into, as well as more abstract notions and social phenomena.
Even something like the concept of number can be approached and conceptualized in very different ways: via formal arithmetic or via psychology and the social sciences, for example.
And what are we to make of birds that keep track of the comings and goings of their potential prey by counting and remembering how many entered or exited the burrow they are spying on? These predators would not be much interested in questions about the concept of number, but their counting abilities derive from a pattern of neural processing which necessarily represents or instantiates the concept in some form. Arguably, some such primitive, pre-linguistic and pre-theoretical notion of number underlies even our most sophisticated mathematical ideas and capacities.
[This is a revised and abridged version of a piece which was published a few weeks ago at The Electric Agora.]
Saturday, November 16, 2019
A few thoughts on intellectual history, abstraction and values
Terms like “pragmatism” as applied to philosophy and the history of ideas – most isms, really – are intrinsically vague and useful only to the (necessarily limited) extent that they help to bring out persistent or more fleeting strands or commonalities in thinking within or across populations.
Even the views of individuals are often difficult to fathom and characterize accurately. That these views generally change over time, from book to book, from article to article, from diary entry to diary entry, makes the task even harder.
Nor is any one thinker’s work privileged over another’s. Judgments on the intrinsic merits of individual works or of the merits of particular thinkers are very difficult to make in an objective manner.
But individual thinkers can be readily assessed according to the compatibility of their views with the findings of contemporary and subsequent scientific and scholarly investigation. Certain thinkers can also be shown to have been more influential than other thinkers. Unfortunately there is very little correlation between these two vectors. Sometimes it seems that an inverse correlation between compatibility with science and personal influence applies.
In my view, there is no grand narrative and no abiding canon. We find ourselves with respect to the history of ideas – just as we do with respect to any and every aspect of this relentlessly evolving world – in the midst of complexities which can only be satisfactorily “simulated” or modeled by the reality that is generating them. The best we can do is make marginal notes.
One of Louis Rougier’s early books was called En marge de Curie, de Carnot et d’Einstein: études de philosophie scientifique (roughly, “In the margins of Curie, Carnot and Einstein: studies in scientific philosophy”). Marginal notes, you could say, by a marginal figure.
Rougier’s works are not on anybody’s essential reading list today. He made some bad career moves and got pushed aside, but he was a rising star in the 1920s and a very influential figure in the 1930s. You want to pigeonhole him? Not possible, I’m afraid.
Rougier was aware early on of the thought of William James (whom he read in translation). It was James’s version of pragmatism that he singled out for attack as a young man, but which arguably was not all that far from his own developing views. As Rougier himself often pointed out, intellectual history can be seen as a dense network of ironies and contingencies.
____________
There is another angle to this. It relates to the nature of language. I see natural language as something that evolved for specific reasons and which is well suited to certain uses (e.g. storytelling and facilitating and coordinating social interactions of various kinds) and not so well suited to other uses. In particular, I am wary of the dangers of using abstract concepts in a relatively unconstrained way as often happens in theology and philosophy.
Common concrete nouns involve abstraction. There is no instance of a dog that is not also a particular animal of a particular breed or mix of breeds; or of a table that is not of a particular type and size and shape and color, and so on. Such words, however, are clearly both semantically constrained and useful. Common abstract nouns are also useful as a sort of linguistic shorthand.
The trouble is that we have a natural tendency to hypostatize concepts and this has led, variously, to myth, ideology, philosophical puzzles and the elaboration of metaphysical systems.
Mythic elements in our thinking are unavoidable. Likewise ideology, to an extent. Our brains just naturally generate value-laden narratives. Beyond this we are, as language users, committed to certain rudimentary metaphysical assumptions associated with grammatical structures that can lead to philosophical puzzles or pseudo-problems. But the deliberate construction of free-standing or self-contained metaphysical systems is something else again.
Arguably myths, ideologies and metaphysical systems (unless the latter are closely tied to science) lack any real connection to non-human reality. Many metaphysical systems fail even to connect to human reality in any significant way.
Within science, abstractions are necessarily constrained. They play their assigned roles within, and take their meaning from, theories. The abstractions are constrained by the theories, and the theories are constrained by the rules and protocols of the discipline in question and ultimately by empirical evidence.
The formal sciences take us further from natural language than empirical science generally does. Abstractions in mathematics and logic take their meaning from, and so are constrained by, the rules of the systems involved. They are, as it were, contained within the system. What Rudolf Carnap referred to as “external” questions about these concepts are misguided and ultimately meaningless.
I would not want to commit too strongly, however, to a distinction between the formal and the empirical. Many developments point to a blurring of the distinction. For example, pure mathematical structures have long been known to play important roles in modeling physical reality. And, of course, the rapid development of computers and artificial intelligence is changing the nature of mathematics, arguably moving it closer to empirical or applied science. Algorithmic information theory is a case in point, focused as it is both on practical problems and on deep questions concerning the fundamental nature and limitations of computation and mathematical thinking (i.e. on metamathematics).
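As a toy illustration of that dual character (my own sketch, not anything from the original essay): the Kolmogorov complexity of a string, the length of its shortest possible description, is uncomputable, yet ordinary data compression yields a crude, computable upper bound which is used in practical work.

```python
import random
import zlib


def complexity_upper_bound(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable upper bound
    on its (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(data, 9))


# A highly patterned string admits a short description and compresses well;
# a pseudo-random string of the same length barely compresses at all.
patterned = b"ab" * 5000
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(10000))

print(complexity_upper_bound(patterned))  # small
print(complexity_upper_bound(noisy))      # close to 10000
```

The gap between the two outputs is a (very rough) measure of structure, which is the sort of bridge between practical computation and metamathematical questions gestured at above.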
____________
Moral, social, political and aesthetic values and convictions can be described but ultimately cannot be derived or justified scientifically. Moreover, discursive reason cannot be applied in any really comprehensive or extensive way to normative questions without creating drastic distortions and oversimplifications. Discursive reason operates on one level; values on another.
Whenever I read a philosophical text on normative questions which is framed in terms of arguments mounted in standard philosophical style I rarely get beyond a couple of paragraphs before a move is made which appears unmotivated or to which I object for one reason or another. Historical approaches make more sense to me, especially when they are (or aspire to be) purely descriptive.
[This piece is an extract (slightly modified) from an essay of mine which was published at The Electric Agora.]
Friday, August 23, 2019
Scholarship and activism
David Ottlinger has (as I see it) sound intuitions about the nature of postmodernism and other unfortunate intellectual fads and fashions but, as a committed Kantian, he inhabits a very different intellectual world from the one I inhabit. Given his Kantian assumptions, it is no surprise that he has very different views from mine concerning what philosophy and serious scholarship more generally is or should be. He recently wrote a piece claiming that "all philosophy is activist philosophy."
He mentions, amongst others, the Utilitarians: "Jeremy Bentham and John Stuart Mill were not just writing about prison reform. They wanted to reform actual prisons. English prisons. On Fleet Street. Will anyone raise their hands and say that their works are unphilosophical? Or unscholarly?"
Certainly much of the writing of Bentham and Mill is philosophical (encompassing social philosophy, philosophy of law, etc., etc.). This work may plausibly be deemed "scholarly". But there are different senses of the term "scholarship" and its cognates.
This definition of scholarship (Collins English Dictionary) picks out what I see as essential: "Serious academic study and the knowledge that is obtained from it." Unlike science, scholarship is text-based. But, like science, it is all about research, about building a body of soundly based knowledge. It is not (primarily) about changing the world. This distinction (between knowledge and understanding on the one hand and social action on the other) matters.
Ottlinger writes:
Even at their most abstract, most philosophers want to change the world. I have always assumed that Christine Korsgaard actually wants to build the kingdom of ends. Axel Honneth actually wants us to recognize one another.
Philosophers are indeed often most concerned with normative questions, and often this is associated with a desire to change the world in particular ways.
My point is simply that there is a tension between such approaches and traditional, secular notions of scholarship. What scholars (as people) value or want is irrelevant to judging the value of their work as scholarship.
Much writing – old and new – which is classed today as philosophy is decidedly not scholarship (at least in certain well-accepted senses of the term). This does not necessarily mean that it has no value, of course.
Without his scholarly training, Nietzsche would not have been Nietzsche, and you could say that he was a "scholarly" writer. But the work for which he is known is not scholarship.
Here Ottlinger sets out his views concerning what ("in part") philosophy is:
Even when they don’t have consequences that would dictate changes in our material circumstances or our politics, philosophical ideas matter. They shape our attitudes and values. Two people can be sitting staring at a book, occasionally turning the page, yet only one of them is reading. In the same way two people can be going through the same lives, working the same jobs, having the same sort of families but yet have deeply different inner experiences. They might be leading totally different interior lives. One might be rich and fulfilling, the other barren and empty. Philosophy deals, in part, with these kinds of differences.
The emphasis – as with religion – is on the "inner life". This aspect complements the previously-discussed activist aspect, presumably.
Like David Ottlinger, I am interested in values, but I don't see them as being accessible or amenable to reason (and, by extension, scholarship) in quite the way he does.
Friday, July 12, 2019
Lee Smolin's realism
Lee Smolin is a respected physicist who has always had strong philosophical interests and convictions. He recently articulated his realist views in a public lecture. What follows are my notes on his lecture mixed in with a few comments and observations.
Smolin is strongly opposed to postmodernists who reject the notion of objective truth and who see reality as a social or historical construct. He draws parallels between the anti-realism of postmodernists and the anti-realism of certain physicists associated with the development of quantum mechanics (QM) and the so-called Copenhagen interpretation.
Smolin claims (as Einstein did) that QM is an incomplete theory and so, in a real sense, wrong. The key problem with QM as Smolin sees it is the so-called “measurement problem”. It relates to the notion of wave–particle duality and the two laws or rules that QM provides to describe how things change over time. Rule 1 says, in effect, that (except during a measurement) the wave evolves smoothly and deterministically (somewhat like a wave on water). This allows the system to simultaneously explore alternative histories which lead to different outcomes, all of which are represented by the smooth flow of the wave. Rule 1 applies when you are not making a measurement; Rule 2 applies only when you make a measurement.
Smolin argues that Rule 2 means that QM is not a realist theory. If we (or other observers) were not around, only Rule 1 would apply.
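In the standard textbook presentation (my gloss, not Smolin's exact formulation), the two rules can be written as follows:

```latex
% Rule 1 (unitary evolution): between measurements the state vector evolves
% smoothly and deterministically according to the Schroedinger equation.
i\hbar \, \frac{d}{dt} \, \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle
% Rule 2 (measurement): when an observable A with eigenstates |a_i> is
% measured, outcome a_i occurs with probability
p(a_i) = \bigl\lvert \langle a_i \vert \psi \rangle \bigr\rvert^{2}
% and the state discontinuously collapses to |a_i>.
```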
One of the main developers of the theory, Erwin Schrödinger, was uncomfortable with the theory and its implications. He crystallized his doubts in the form of the famous live/dead cat-in-the-box thought experiment, which Smolin explains in his talk (starting at 39:54).
Niels Bohr, in contrast to Schrödinger, embraced the paradoxical nature of QM, partly because it fitted in with ideas which he had developed previously. Bohr’s notion (or philosophy) of complementarity was shaped by these ideas and by the observed behavior of elementary particles. Sometimes such particles seem to behave as if they are waves, sometimes as if they are particles and, crucially, how they are observed to behave depends on the details of how we go about observing them.
Smolin takes an unequivocally negative view of Bohr’s metaphysical views as well as of the views of Bohr’s protégé, Werner Heisenberg. Here he is on the former:
“Now, of course, Bohr had a lot to say about things being complementary and in tension all the time and you always have to have two or more incompatible viewpoints at the same time to understand anything, and that especially goes […] for knowledge and truth and beauty. And he got off on the Kabbalah, of course. Anyway [long pause] … it doesn’t cut it with me.”
For Smolin, QM's incompleteness is intimately bound up with its incompatibility with realism.* QM is not consistent with realism because the properties it uses to describe atoms depend on us to prepare and measure them.
“A complete theory,” insists Smolin, “should describe what is happening in each individual process, independent of our knowledge or beliefs or interventions or interactions with the system.” He is interested in understanding “how nature is in our absence.” After all, we were not around for most of the history of the universe.
Smolin defines realism as the view that nature exists independently of our knowledge and beliefs about it; and that the properties of systems in nature can be characterized and understood independently of our existence and manipulation. Our measuring etc. “should not play a role in what the atoms and elementary particles are doing.” What he means, I think, is that our interventions should not play an essential or crucial role in the descriptions and explanations which our theories provide.
“A theory can be called realist,” Smolin explains, “if it speaks in terms of properties whose values do not require us to interact with the system. We call such properties ‘beables’.”
By contrast, a theory whose properties depend on us interacting with a system is called operational. Such properties are called “observables”.
Observables are defined as a response to our intervention. Beables, by contrast, are not defined as a response to our intervention. They are just there, it seems.
But how do we get to know the values of these properties unless we interact with the system? Also, there is the framework question. Properties and values arguably only exist within the context of a particular perspective or theory. In order for properties and values to be properties and values, we need to conceptualize them as such. I will ignore this broader question, however, and focus on what Smolin means by interaction.
Even ordinary observations (like seeing or hearing or recording something electronically) involve us or our measuring devices interacting in some way with the system we/they are observing/recording. Smolin appears not to be concerned with such interactions here because, although the nature of the observer’s perceptual apparatus and/or the nature and settings of the equipment being employed determine what is and what is not being observed or recorded, the results are otherwise independent of us. The type of datum is determined by the nature of the observer or of the observing or recording process; the data themselves are not.
In the case of experiments with elementary particles, however, the situation is subtly – and sometimes dramatically – different. Interactions are such that they determine, or play an active role in determining, the values in question.
Arguably, ordinary cases of measurement and observation do not pose problems for the commonsense realist. But if our observations alter in a material way whatever it is which is being observed – as appears to be the case in respect of the quantum realm – problems arise.
Operationalism was first defined by the physicist Percy Bridgman (1882–1961). The book in which he elaborated his views, The Logic of Modern Physics, was published in 1927, the same year QM was put into definitive form. Bridgman’s philosophical approach has much in common with the instrumentalism which characterized the views of the majority of thinkers (physicists, logicians, philosophers) associated with logical positivism. Bridgman was in fact personally involved in the activities of the Vienna Circle.
It was the physicist John Bell who introduced the concept of beables. According to Bell – and according to Smolin – it should be possible to say what is rather than merely what is observed. This is all very well but – quite apart from philosophical arguments questioning the notion of a noumenal world – experimental results continue to come out against the realists. Experiments with entangled particles, for example, seem to exclude the possibility of any form of local realism. Some form of nonlocal realism is still very possible however.
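The canonical case is the CHSH form of Bell's inequality (a standard result, stated here for concreteness rather than taken from the lecture). For two entangled particles measured along settings a, a′ and b, b′, any local hidden-variable account constrains the correlation functions E, while quantum mechanics predicts, and experiment confirms, violations up to Tsirelson's bound:

```latex
% Local realism (CHSH inequality):
\bigl\lvert E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr\rvert \;\le\; 2
% Quantum mechanics allows correlations up to
2\sqrt{2} \approx 2.83
```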
Smolin is at his weakest when he talks history. The story he tells about the generation of physicists who grew up during the Great War is hard to swallow. It seems that they were predisposed to anti-realism by virtue of the unusual circumstances of their early lives. They had witnessed at an impressionable age the destruction of the social optimism of the 19th century, and so were skeptical of rationality and optimism and progress. They had lost older brothers and cousins and fathers and uncles and had “nobody above them ...” No wonder they didn’t believe that elementary particles etc. have properties which are independent of our interactions with them!
You would think that the fact that Niels Bohr, the father of the Copenhagen interpretation, was not a part of this generation would sink Smolin’s generational explanation from the outset. So would even a cursory knowledge of the history of 19th-century thought, which is shot through with various forms of idealism, anti-realism and radical empiricism. The phenomenalist philosophy of science of Ernst Mach (1838–1916) is a case in point. At the end of the 19th century, Mach articulated ideas which were later picked up by the thinkers Smolin is criticizing.
Smolin explicitly recognizes that Bohr’s main ideas were formed well before the development of quantum mechanics and that he was influenced by 19th century thinkers – including by Kierkegaard (whom Smolin clearly does not hold in high esteem).
Smolin quotes some of Bohr’s claims:
“Nothing exists until it is measured.”
“When we measure something we are forcing an undetermined, undefined world to assume an experimental value. We are not measuring the world, we are creating it.”
“Everything we call real is made of things that cannot be regarded as real.”
Heisenberg followed the same general approach:
“The atoms or elementary particles themselves are not real: they form a world of potentialities or possibilities rather than one of things or facts.”
“What we observe is not nature itself but nature exposed to our method of questioning.”
Bohr said: “We must be clear that when it comes to atoms, language can be used only as in poetry. The poet […] is not nearly so concerned [with] describing facts as [with] creating images and establishing mental connections.” What he meant, presumably, is that the normal referential function of natural language cannot be used in relation to the quantum world, and anything we say about that world (using natural language) will necessarily be a creative construct shot through with metaphor and paradox.
Maybe so. Or maybe not. It is not something we can know a priori. It all depends on how our models develop and on the results of experiments. But, until QM is subsumed into some (hypothetical) broader theory which allows us to envisage quantum processes in more intuitive or realism-friendly ways, Bohr's general views regarding the radical inapplicability of natural language and ordinary logic to quantum events or processes will remain plausible.
* There is also the question of gravity. Quantum field theory brings together QM and special relativity. QM and general relativity have yet to be satisfactorily reconciled, though a line of research associated with the so-called AdS/CFT correspondence – a string theory-based approach – has made considerable progress towards this goal.
[This is a revised version of an essay published at The Electric Agora on June 4.]
Thursday, May 30, 2019
Knowledge of the past, knowledge of the world
Is it acceptable to distinguish between, on the one hand, an account of the past (whatever kind of account it may be) and whatever it is which such an account is, or purports to be, about? I ask this question because I was challenged for using this form of words in a recent discussion with Daniel Kaufman. Not only would I argue that it is acceptable, I would say that such a distinction (or something very like it) is necessary to make sense of the very concept of truth-telling versus lying, or to make sense of the distinction between history and fiction, or between scholarship and polemics, or between science and pseudo-science (in the context of those sciences which deal with the past).
My point is that using the form of words I did does not necessarily commit me to a particular metaphysical view. One can, I think, use and understand such language and employ such a distinction whilst remaining completely agnostic about the nature of the past: it might be a meaningless concept; it might be a figment of our imaginations; it might be in some sense actual but created and determined, in part or in toto, by us; it might be stable, or it might be shifting and unstable (i.e. dreamlike). Or it might be more or less how the vast majority of humans probably think of it: that is, as existing or having existed quite independently of us and our thoughts and desires; as stable and unalterable; as knowable only imperfectly and in part. Most of the listed options are quite silly, of course, but my point is that you can make the statement I made without necessarily committing to any particular view.
There was a second claim of mine to which Dan took exception, calling it “flat out false… [a]nd obviously so.” He elaborated on his objections in an essay and, given that the essay prompted further extensive discussion (more than 160 comments), some may feel that the topic has been done to death. My view, however, is that some recapitulation and clarification may be useful and may help to allay confusion and misunderstanding.
The claim in question [made in the course of online discussion of an essay of mine] was that “the past is what it is (or was what it was).” I could elaborate on what I meant by this but perhaps the best way to unpack the intended meaning without inadvertently bringing in new complications is simply to look at the context in which the claim was made.
A commenter, reacting against my skeptical attitude to the stories historians tell and to my suggestion that we should focus instead on reading for ourselves texts from the past, had queried my use of certain words (‘external’, ‘alien’, ‘arbitrary’) in describing how historians often project their own (political, moral, etc.) preoccupations and values into the stories they tell about the past, preoccupations and values which are often quite alien (as I put it) to the people and societies being described.
“All of these words,” he said, “are puzzling to me: ‘external’, ‘imposing’, ‘alien’, ‘arbitrary’. Consider me unpersuaded.”
“The past is what it is (or was what it was),” I replied. “Our present, from the point of view of the past, does not exist. I am using words like ‘alien’ and ‘external’ to make this point. I assume that we both want to understand the past in its own terms; as it was; distorted as little as possible by present-day preoccupations and perspectives. Thus my concerns (overdone in your estimation) about historians wittingly or unwittingly inserting their own values or the values of their time into the stories they tell."
I said that we want to understand the past “in its own terms” and “as it was.” Taken in isolation I concede that this latter phrase especially could be seen to imply the naive view which Dan ascribes to me. But I explained my meaning in the words which immediately followed: [we want our view to be] “distorted as little as possible by present-day preoccupations and perspectives.”
This is why I recommended focusing on primary sources, reading the actual texts from the past in the languages in which they were written. Will we be able to understand them in exactly the same way their authors understood them? No. Our experiences are very different. But scholars who immerse themselves in the writings of a particular period are able to achieve a very good sense of the perspectives of the original authors.
So when I spoke of the past “as it was” I was not talking about a perspectiveless, abstract or noumenal past at all. I was talking about the actual perspectives of actual people who lived and spoke and wrote and some of whose writings we have access to and are able to read.
There was also some discussion of the very distant past, before the advent of observers. Obviously, envisaging this poses greater problems because you cannot talk about the perspectives of the time, and compare or contrast them with our own. There were no perspectives then.
I want to turn now, albeit briefly, to some broader issues and specifically to a piece written some years ago by Daniel Kaufman entitled “Knowledge and Reality” which came up in the discussions described above. It begins as follows:
“If you were to go to the trouble of asking ordinary people about their views on knowledge and reality – accosting them, at random, on street corners, perhaps – and succeeded in getting honest answers, you would likely discover that they hold something like the following view: What it is to know something is to possess some body of information – to have a “picture” of thing – that squares with or is true to reality. If you were to push further, regarding ‘reality’, they would likely characterize it along the lines of “everything that actually exists” (the ‘actually’ intended to preclude imaginary and fictional things like unicorns and Sherlock Holmes).”
Plausibly, this is what people would indeed say. You could see it as a form of naive realism. But, if my view is (as Dan has suggested) a form of naive realism, it is not this form of naive realism.
The crucial issue here for me is a matter of underlying assumptions and perspective. I see my body as an intrinsic part of the physical world and my “self” as the creation of a (physically instantiated) culture. This culture is just as much a part of reality as anything else.
Cultural products – languages, artworks, nursery rhymes, fiction, music, etc. – undeniably constitute a part of reality; and Sherlock Holmes stories and unicorn legends are part of this reality. Obviously the characters and creatures featured in these stories are not real in the sense that real people or real animals are real (though small infants are unable to grasp this). But, as imagined characters and creatures, they are components of the real (and physically instantiated) cultural matrix in which we happen to exist. A cultural matrix of some kind is, of course, a necessary condition for our existence as persons and for our functioning as human beings.
The world is a single world. (At least I see no reason to think otherwise.) It includes myself and others and language and culture as well as all the fundamental processes upon which physics and other sciences are focused.
Though all worthwhile discourse will (in my opinion) be consistent with the findings of science, it will not necessarily be scientific, even in a broad sense of the word. The trick (as I see it) is to feel the pulse and appreciate the potency of language and other mechanisms of cultural expression without metaphysicalizing these processes in any way. (Without falling for Romantic myths about art and artists, for example.)
[This is an abridged version of an article originally published at The Electric Agora.]
Monday, April 1, 2019
Scientism
At The Electric Agora I recently discussed the views of Alex Rosenberg and some other thinkers on science, knowledge and consciousness. Rosenberg embraces the term 'scientism' (which, of course, was coined as a derogatory term) as being descriptive of his point of view. I highlighted serious problems with Rosenberg's approach, but there are some aspects of it with which I agree.
The term 'scientism' is used in different ways. In a dialogue between Dan Kaufman and Massimo Pigliucci which was linked to in the comment thread, reference was made to a Scientia Salon article which specified 26 different meanings of the term.
As I understand it, 'scientism' referred originally to the clearly inappropriate use of scientific (or science-like) methods; to the application of such methods to areas where they cannot be made to work effectively (normative ethics, for example). I strongly reject scientism in this sense.
In the past I have emphasized the pitfalls of polysemy. Philosophical debate in particular routinely involves – and in fact is often driven by – confusions about meaning, with interlocutors imperceptibly sliding from one meaning of a term to another in the course of the discussion. The essential vagueness not only of the terms of ordinary language but also of most philosophical terms needs to be recognized. The tendency of philosophers to imagine that philosophical terms can be made precise in the way scientific or mathematical terms can – a tendency which leads more often than not to semantic hair-splitting and an unproductive proliferation of points of view – can in fact be seen as a form of scientism in the original sense of the term.
But the term has come to be applied to the views of those whose only sin is to have a high regard for science and who question the worth and validity of certain kinds of discourse (like theology, for example). In the dialogue mentioned above, Massimo Pigliucci talks about the way religious thinkers insist on “other ways of knowing” and use the term ‘scientistic’ to label those who reject these purported ways of knowing.
How you see intuitions is crucial here. Obviously they are important in practical life and for theoretical conjectures. But they need to be tested using rigorous scientific or scholarly methods if they are to be incorporated into our body of scientific and scholarly-historical knowledge.
If such a view constitutes scientism, I happily embrace the label. But, given the confusion surrounding the term, we would probably be better off dispensing with it altogether.
Monday, January 28, 2019
Culture and language: some personal reflections
Much is written about shared narratives and their role in creating a common culture. But what is a culture?
The idea of a common culture – whether that culture is defined in regional, national or supranational terms – is an idealization and necessarily vague and imprecise. This fact needs to be recognized. But it does not entail that clear and definitive claims about culture cannot be made. One way of making claims more precise is to focus on specific cultural elements. My background in comparative literature and linguistics leads me to focus on language.
Language can be seen from a broadly literary perspective on the one hand or from a scientific perspective on the other. The former perspective motivates my views to a large extent and provides a partial conceptual framework (based on certain intellectual and literary-historical traditions). But linguistics extends and strengthens the conceptual framework and provides a bridge to cognitive and evolutionary science.
Language is central because, without language, distinctively human forms of social practice would not have arisen. Take early ritual burials, often seen as markers of emerging human consciousness. Clearly, some kind of shared narrative is at work here; a shared notion of an afterlife for which the deceased is being prepared. Such a sophisticated narrative could not have existed before our ancestors developed a capacity for language.
There is a huge gulf between the linguistic or semiotic capacities of humans and other animals. All known human languages (apart from pidgins) share an equivalent degree of structural and grammatical complexity. Presumably language did not come into existence all at once and fully-formed, but evidence for hypothetical, intermediate forms is unavailable and we can only speculate regarding the communicational powers of pre-modern humans and other hominins.
There is an important distinction between a (natural) language and the more general and abstract concept of human language which parallels the distinction between a culture and human culture in general. All actual linguistic phenomena occur within a specific linguistic context, of course. But languages share common elements and/or structural features with other languages, so the idea of a language is not a simple one and is not without its problems.
In what sense does a language exist as distinct from particular instances of language use? Spoken words and written texts are generally assignable to this language or that, but precise boundaries are impossible to draw. Grammars and dictionaries try to do this but they can never reflect the constantly shifting contours of actual linguistic practice which always depend on the knowledge and behavior of individual speakers. In the final analysis, then, what we have is a set of unique and (to a greater or lesser extent) overlapping idiolects. We find it convenient, however, to group sets of idiolects into what we call dialects or languages. (Noam Chomsky and many other linguists have explicitly endorsed this idea.)
If a language is difficult (or impossible) to define, the notion of “a culture”, being more general, is even more problematic. But something similar to an idiolect-based approach can help us out. Each of us can be seen to represent a unique cultural mix. What we call “a culture” is represented by a set of (potentially communicating) individuals whose cultural knowledge and practices are similar in certain respects.
Though only limited precision is possible when talking about particular cultures, it helps if the primacy of the individual (in the sense explained above) is borne in mind. Consequently, a bit of personal history may help to flesh things out.
I went to a high school which had a strong classical focus. Latin was considered an important subject. We read Book IV of Virgil’s Aeneid, the letters of the Younger Pliny (not recommended) and extracts from Julius Caesar’s Commentarii de Bello Gallico (an account of his military campaigns in Gaul).
The older boys had studied classical Greek as well as Latin, but Greek was phased out. We were aware that Latin also was being marginalized in the broader educational culture. Fewer and fewer students were taking up Latin and, of those who did, fewer and fewer were taking it through to their final high school years.
Language lies at the heart of culture and knowing Latin gave us a sense of being part of a long tradition of Western cultural life. Being exposed to the actual words of cultural forebears who lived in a world untouched by Christian philosophy and yet which did not seem completely alien challenged us in subtle ways. This is an aspect of classical learning which is not always appreciated. Classical values (despite attempts by later thinkers to Christianize them) are opposed in quite fundamental ways to the moral spirit of the New Testament and, by extension, to the underlying values of the many social and political movements that were founded upon and driven by secularized versions of Biblical ethics and eschatology.
Elements of classical culture permeated ordinary life in ways that are difficult to conceive today. The details, taken individually, seem trivial: Latin words and phrases were used in English more than they are now; likewise classical references in English idioms were once more common (like “crossing the Rubicon”). And historical figures were routinely alluded to. I don’t know if “Great Caesar’s ghost!” was ever actually a common exclamation, but it certainly was a successful 20th-century popular culture meme. Significantly, in the 1990s television series Lois and Clark: The New Adventures of Superman, the Perry White character says, “Great shades of Elvis!” instead of “Great Caesar’s ghost!”.
One of the things that characterized European cultures in previous centuries was a fairly widespread knowledge of Greek and Roman myths and legends. You can’t read much literature in English or other modern European languages or appreciate the visual arts without at least a cursory knowledge of these stories.
Strangely, even in the 20th century, the names and images of Greek and Roman (and Scandinavian) gods and heroes were very popular and effective marketing tools for selling consumer products. Or even football teams (e.g. Ajax Amsterdam).
As a love goddess, Venus was always popular. Some years ago, a local firm, Venus Packaging, got rid of their old, sexist logo which incorporated a shapely silhouette with the tagline: “That’s packaging!” On a more sober note, STDs used to be called venereal diseases.
Fables (going back to Aesop and beyond) and fairy tales were generally better adapted for children than Greek myths and were woven deep into the fabric of life. I recall getting off a train at London’s North Wembley station (which was near where I then lived). I was in the back carriage and had a long way to walk down the platform to the exit gate. As I passed through, an elderly white woman was telling the story of “The Tortoise and the Hare” to the young, black ticket collector (an immigrant from the Caribbean). Obviously, she had made an allusion to the fable (she being the tortoise), which he had not understood. It was a poignant scene, especially given the race-based social and political frictions which were beginning to manifest themselves in parts of London and other English cities.
A sense of regret for the loss of shared stories and traditions has nothing to do with racism. It applies within all ethnic or racial groups and across them. But it also reflects a particular view of culture which my literary education happened to reinforce.
“The term culture,” wrote T.S. Eliot in his Notes Towards the Definition of Culture, “includes all the characteristic activities and interests of a people; Derby Day, Henley Regatta, Cowes, the twelfth of August, a cup final, the dog races, the pin table, the dart board, Wensleydale cheese, boiled cabbage cut into sections, beetroot in vinegar, 19th-century Gothic churches and the music of Elgar. The reader can make his own list …”
As I see it, without a rich, common culture, not only does a society become less interesting, it becomes less resilient. It fractures. And this is precisely what we are seeing in the United States and many Western countries today.
No doubt there are many causes of and explanations for the social and political problems we are currently witnessing, but the marginalization of shared, traditional stories is undeniably a significant factor. Given the nature of our brains – given that they are narrative-consuming and narrative-generating engines and that our sense of self and our sense of meaning and purpose are narrative-dependent – the loss of one set of stories will make space for another. How one characterizes and interprets the current changes will depend on one’s ideology, which in turn depends on the stories one has internalized over a lifetime.
A part of me (my non-scientific, emotional side) sees a toxic mix of manufactured slogans coupled with ad hoc narratives rushing to fill the vacuum left by the loss of traditional and organic modes of thought and practice.
This judgment is tempered, however, by an awareness of the essential transience of languages and cultures, and a belief that what is truly valuable in what has been lost, culturally speaking, will – for as long as humans continue to exist and thrive – always manage to find new forms of expression.
[This is a revised version of an essay first published at The Electric Agora.]