Wednesday, May 7, 2025

Who wrote the works of Shakespeare?


The Shakespeare authorship question was not a topic I had ever taken a special interest in. As a student I simply accepted the conventional view. Over time, however, I realized that a rigorous, evidence-based case for the glover’s son from Stratford-upon-Avon as author of the works in question had never, in fact, been made, and that there were strong reasons to believe that the name associated with these works was a pseudonym, generally understood to be such by those in the know at the time.

One strong reason to doubt that the man from Stratford wrote these very significant works is the complete lack of a literary paper trail generated during his lifetime: documentary sources (letters, official documents and the like) attesting to his having been a writer. There is plenty of such evidence for other writers of the time, even for relatively minor figures.

Lately I have been having another look at the Shakespeare authorship debate and was surprised how polarized and rancorous it has become. I will, however, refrain from speculating on the causes for this and simply set out my current position as clearly and concisely as I can, touching on some broader implications of the debate.

First of all, it’s worth noting that the literary-academic mainstream position has shifted. It is now generally acknowledged that significant parts of the Shakespearean dramatic corpus were written by other authors. Co-authorship, whether through actual collaboration or through revision or supplementation of plays by other hands, was common practice at the time and the traditional view of there being a single author of the plays in question is simply no longer tenable. The question (for the plays at least) is now one of primary or principal — rather than sole — authorship.

If I were to delve more deeply into the authorship question, my focus would probably be the poems rather than the plays. For one thing, single authorship for poems is the norm, but there are other reasons. Crucially, it was the publication of the narrative poems “Venus and Adonis” (in 1593) and “The Rape of Lucrece” (in 1594) which first brought the name William Shakespeare to public prominence in a literary context: both publications bore a clear authorial designation and were extremely popular. (“Lucrece”, by the way, is a richer and more accomplished work than “Venus and Adonis”; the latter I find forced, artificial and rather tedious — though interesting from a psychological and moral point of view.)

In my view, doubts may be raised about the conventional attribution of the Shakespearean oeuvre to the man from Stratford purely on the basis of his station in life and of the style and content of the poems, which are clearly the work of someone steeped in classical literature and Renaissance art (especially painting).

There are indications also of an acute political awareness. As I see it, much of the poetry and the best of the plays are the work of a disillusioned insider attuned to and caught up in the tides and currents of national politics rather than that of an upwardly-mobile provincial with very little education.

With respect to politics, it’s well to bear in mind that politics and religion were intimately intertwined at the time and (given the upheavals of the Reformation) necessarily transcended national borders. The years leading up to Queen Elizabeth’s death (and the all-important succession) were fraught with social and religious tensions, fears of invasion and civil war, widespread censorship and ruthless political repression.

As I have come to see it, many seemingly obscure or puzzling passages in Shakespeare’s works represent the bubbling up of a potent political subtext which would have been clear to informed contemporary readers or audiences. In repressive and dangerous times, a large degree of obliqueness is required. Setting stories in other times and places, for example, is a time-honoured way for writers to secure a certain amount of cover and space for dealing with hot political topics.

The lack of any solid evidence in contemporary sources for the traditional attribution calls at the very least for a degree of skepticism. But the sorts of considerations I have mentioned above have led me beyond a mild skepticism to the conviction — still largely intuitive but bolstered by a certain amount of research and reading — that the man from Stratford-upon-Avon authored neither the poems nor the plays attributed to William Shakespeare.

The matter will not be finally settled, of course, until we know more about who did write these works, that is, until compelling cases connecting a particular author (or set of authors) to large parts of the canon are made, with each such case incorporating the latest historical findings as well as rigorous textual and linguistic analysis. Stylometric analysis has been used clumsily or misapplied in the past but (mainly due to its usefulness in counter-terrorism and other forensic applications) the discipline of stylometry has attracted funding and advanced significantly in recent years. Quantitative, corpus-based approaches, used judiciously and in conjunction with traditional forms of philological and stylistic analysis, have already changed the research landscape and will no doubt lead to more progress on the Shakespeare authorship front, if not to definitive conclusions.
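For readers curious about what such quantitative approaches actually involve, here is a minimal sketch (in Python) of Burrows’ Delta, one of the better-known stylometric distance measures. It compares texts on the relative frequencies of common function words, which tend to carry authorial rather than topical signal. The function-word list below is illustrative and far shorter than any real study would use; treat this as a toy, not a research tool.

```python
import re
from collections import Counter

# A handful of function words; real studies use hundreds of features.
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "a", "that",
                  "it", "with", "for", "not", "but", "his", "by"]

def profile(text):
    """Relative frequency of each function word in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def delta(test_text, candidate_texts):
    """Burrows' Delta between a test text and each candidate corpus.

    Each feature is z-scored against the candidate set, and Delta is
    the mean absolute difference of z-scores. Lower means stylistically
    closer. candidate_texts maps author names to sample texts.
    """
    profiles = {name: profile(t) for name, t in candidate_texts.items()}
    n = len(FUNCTION_WORDS)
    means = [sum(p[i] for p in profiles.values()) / len(profiles)
             for i in range(n)]
    sds = [(sum((p[i] - means[i]) ** 2 for p in profiles.values())
            / len(profiles)) ** 0.5 or 1e-9
           for i in range(n)]
    z = lambda p: [(p[i] - means[i]) / sds[i] for i in range(n)]
    zt = z(profile(test_text))
    return {name: sum(abs(a - b) for a, b in zip(zt, z(p))) / n
            for name, p in profiles.items()}

# Hypothetical usage (the file names are placeholders):
# scores = delta(open("disputed.txt").read(),
#                {"Candidate A": open("a.txt").read(),
#                 "Candidate B": open("b.txt").read()})
# print(min(scores, key=scores.get))
```

Real attribution work layers far more on top of this (cross-validation, much larger feature sets, genre and chronology controls), but the core idea really is this simple.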

In the broad scheme of things, historical authorship questions are of limited interest and importance, but the rise and persistence of what certainly was and continues to look very much like a cult surrounding the Stratford man has serious implications for how certain groups and institutions will be perceived and judged. If things play out as I think they will, hard questions will rightly be asked about the state and status of our literary and scholarly institutions.

Bear in mind that English literature was not taught in universities or seen as forming the basis of a distinct scholarly discipline until the late 19th century. And when it was eventually introduced to the universities it battled to gain respect. Obviously it would be quite damning for the literary-academic establishment and for the institutional framework which it has created over the last century or so if these groups and institutions were seen to have embraced, promoted and perpetuated a totally false narrative about the creation of some of the most important literary works of all time.

Worse, this narrative (at least in terms of its core elements) has until now been treated within the literary-academic world as sacrosanct and not to be questioned. How unscientific is that?

Of course, many of today’s literary scholars see their field — and the humanities more generally — not only as not being continuous with science but even as being in direct opposition to science and scientific ways of thinking and seeing the world. This is a profound mistake in my opinion.

Moral, aesthetic and political judgements are crucially important and are not amenable to purely scientific methods of assessment. But civilization is built on a division of labour, and the various professions are trusted to develop and deploy expertise in specific areas and to be a source of reliable knowledge for the broader population. How else can charlatans and their false claims be effectively exposed?

Myths and sacrosanct narratives are not just top-down phenomena, of course; they are sustained and nourished by complex social dynamics involving, amongst other things, politics (broadly construed), in-groups and out-groups, pecking orders and personal egos.

I recall to my shame, soon after graduating with an M.A. in English, pompously dismissing the Shakespeare-authorship doubts of a non-literary friend. In effect, I took his comments as an opportunity to assert my (relative) authority and status. A far more productive — and mature — response would have been simply to try to draw him out on what had caused him to reject the traditional story.

[This piece first appeared last month on my Substack site (markenglish.substack.com). You're welcome to visit and, if you care to, to subscribe in order to receive future posts (by email or alternatively on the Substack app) as they appear. Subscriptions are free and it's easy to unsubscribe.]

Wednesday, September 4, 2024

Language does not "convey thoughts"

I am reprinting here a recent Substack article of mine in which I try to articulate in a very concise way some ideas and intuitions I have been developing about language and communication.

Some months ago I mentioned that, after having spent some time in countries where proficiency in English is not widespread, it was a relief to be in a country which presented no linguistic challenges for me. My focus in England has been on acclimatizing myself to English ways. To say “reacclimatizing” would probably be misleading, given how long it is since I’ve been here and how much things have changed in the intervening decades. I may write about shifts in the social and political mood and changes in the wider culture at a later date but, for now at least, I want to steer clear of politics and related matters.

Lately I’ve been reviewing my views on language, thought and communication, and what follows is a brief statement of my general perspective on the relation between language and thought and the implications of such a view for human communication. Even when there is no “language barrier” in the usual sense of that phrase, effective communication is far from guaranteed. In fact a shared language will often contribute to creating an illusion of agreement, obscuring profound differences in point of view.

In a footnote to one of his essays, Aldous Huxley (obviously recalling a personal experience) talked about an unbridgeable gulf suddenly opening up between two men engaged in a friendly fireside chat on account of a stray remark. A felt or imagined affinity proved to be entirely illusory.

My point is that language, by its very nature as a cultural phenomenon, tends to create illusions of affinity. Its nature is not so much to reveal as to conceal or paper over the very real differences in how speakers see the world.

Language, of course, is a powerful tool for doing what it does best. It facilitates thought and interaction and so makes other aspects of human culture possible. But it doesn’t operate in the way we naively think it does or deliver exactly what it seems to deliver. What linguistic communication doesn’t do is convey thoughts in a literal or even (I suggest) in a metaphorical sense — though we often fool ourselves into thinking that it does.

Let me explain. I have a thought or feeling. I try to put it into words. If it is a thought which draws not only on my commonsense or technical or scientific knowledge of the world but also on my personal memories, values or judgements, then it is private to me: it cannot be encapsulated in a string of words and so cannot be conveyed into someone else’s brain. The words, the sentences are in some sense conveyed, sure, but the crucial point is that the same words and phrases trigger different thoughts and feelings in different brains.

The basic building blocks of a language are phonemes (minimal, meaning-distinguishing sounds). Every language has its own set of phonemes from which words and phrases and sentences are built. In any given language the abstract forms of words (lexemes or word stems) constitute a dynamic lexicon which is involved in complex combinatorial processes involving the formation of actual words and sentences (processes studied under the rubrics of morphology and syntax).

At least a rough distinction can be made between language-specific brain areas and non-language-specific brain areas. Phonological and morphosyntactic processes operate largely unconsciously in specific areas of the brain and are relatively self-contained. At a certain level of abstraction, these processes can be explicitly described and modelled (i.e. formalized). But semantic processing spreads a much wider net in terms of the brain areas involved and is not formalizable. It draws not only on language-specific brain areas but also on non-language-specific areas, including those dealing with emotions, memories of lived experience, general reasoning and problem-solving, etc. This point has been dramatically illustrated in fMRI studies.
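To give a concrete sense of what “formalized” means here, the sketch below encodes a tiny fragment of English morphology (the regular plural) as a set of ordered rewrite rules. It is a toy, of course: real morphological models are vastly more elaborate, and this rule set is purely illustrative. But the principle is the same: explicit, mechanical rules operating on word stems, with no appeal to meaning.

```python
import re

# Ordered rules for the English regular plural: the first matching
# pattern wins. A serious model would handle many more cases.
PLURAL_RULES = [
    (r"(s|x|z|ch|sh)$", r"\1es"),   # bus -> buses, church -> churches
    (r"([^aeiou])y$",   r"\1ies"),  # city -> cities
    (r"$",              "s"),       # default: dog -> dogs
]

def pluralize(stem):
    """Apply the first matching rule to a noun stem."""
    for pattern, replacement in PLURAL_RULES:
        if re.search(pattern, stem):
            return re.sub(pattern, replacement, stem)

print([pluralize(w) for w in ["dog", "bus", "church", "city"]])
# ['dogs', 'buses', 'churches', 'cities']
```

Nothing comparable can be written down for what a given word evokes in a particular mind; that, in essence, is the asymmetry at issue here.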

The key point here is that semantic processing — unlike phonological and morphosyntactic processing — draws on general aspects of thought, including memories and patterns of emotional response which are necessarily unique to the individual. The complex phonological and morphosyntactic processes which give human language its remarkable power are, in an essential sense, shared. These processes only arise and sustain themselves over time within a shared social and cultural milieu. They represent something that speakers of a given language have in common. The semantic aspect of language also involves shared understandings of course, but the sharing involved here is of a much more limited nature.

Take a simple noun like “dog”. It has a publicly-agreed primary meaning or denotation and various other (also publicly-agreed) senses. But even when used in its primary sense to refer to a particular kind of animal, the thoughts and feelings and memories triggered in my mind by this word will (because of our different personal histories) be very different from the thoughts and feelings and memories which the word triggers in your mind. And if this is so with such a mundane word as “dog”, how much more divergent will thoughts triggered by more value-laden or abstract expressions be?

Such differences relate to (but cannot be completely reduced to) the concept of connotation. Connotations are, like denotations, usually publicly shared (at least to some extent) but, being tied to emotion and personal memories, they are necessarily more vague and difficult to define. Certain words are universally accepted as having a negative connotation (like the noun “lackey”, for example, or the adjective “sanctimonious”) but in many cases the connotation is not obvious or easy to describe and different speakers may report diametrically opposed attitudes.

Though the brain processes underlying language remain obscure, the basic elements and patterns of a language — apart from the intricacies of semantics — can be discovered and written down. Linguists have been doing this sort of thing for hundreds of years. In principle at least, anyone with access to a dictionary and a grammar can attain technical proficiency in a new language. All of this suggests that the core mechanisms of language are neither deep nor private in any meaningful sense.

Our personal thoughts, by contrast, remain essentially private, however much we attempt to put them into words or express them in other ways. Literary and other artistic creations are notoriously subject to multiple, divergent interpretations. Gestures can be subtle and expressive but are limited in scope and range.

In some ways, the idea that our personal thoughts remain forever private is confronting. But there is also comfort to be found here, especially in an age awash with spin and propaganda and dominated by social media. For there is, after all, a sacrosanct space where judgements are made or withheld, and where integrity and truth still matter and cannot be threatened.

In fact, the idea I am trying to convey here (an idea, note, not a thought!) can be extremely valuable in helping to minimize frustration and stress.

And lowering our expectations concerning what language can reasonably be expected to do will also (somewhat paradoxically perhaps) make us better communicators.

Tuesday, May 7, 2024

Trying out Substack

Here is the link to my latest Substack piece:


If this Substack site (called Parsing the Parade) gets some traction I may put my main focus there and put my other sites on the back burner. At this stage, however, it is just an experiment.

Submit your email address and new posts (maybe one every few weeks?) will be sent to your inbox. It's free, and it's easy to unsubscribe at any time.

Here is the general link to Parsing the Parade:

Sunday, February 18, 2024

L.L. Zamenhof and Zionism

The other day, walking through a small park in the district of Pietà on my way to Valletta, I was surprised to see a bust of L.L. Zamenhof, the creator of the international auxiliary language known as Esperanto. My first thought was, I didn't know he had a connection with Malta. And, as it happens, he didn't!

Zamenhof was born in what was then the Russian Empire and spent most of his life in the city of Warsaw. For reasons I have yet to fathom (but which probably relate to Malta's somewhat fraught linguistic ecology) Zamenhof's ideas took root here and some five decades ago the local Esperanto Society saw fit to devote funds to the creation of a public monument.

It's not a great work of art and the awkwardly-truncated arms are a little distracting. But this memorial is not bad as such projects go, and certainly a good deal less ugly than many of the official commemorative sculptures and monuments I have seen on this island.

Despite my lack of interest in Esperanto (the very notion of a constructed international auxiliary language is ill-conceived, in my opinion), I quite like the monument. Its scale and proportions are perfect and there is no nonsense or pomposity about it.

Zamenhof was a physician by profession, specialising in ophthalmology, and not an academic linguist. His main linguistic project was inspired by the naive belief that, if the peoples of the world shared a common language, peace would reign. Basically Zamenhof was a religious rather than a political thinker; his social philosophy was based on Rabbinic Judaism, specifically on the ideas of Hillel the Elder and his school.

Responding to the rise of violence against Jews within the Russian Empire which followed the assassination of the Tsar (Alexander II) in 1881, Zamenhof became involved with proto-Zionist groups, founding the Warsaw chapter of Hibbat Zion. He soon had doubts, however, and withdrew from the movement.

Zamenhof was convinced that Zionism, as he saw it developing in the later years of the 19th century and into the 20th, was fatally flawed and would not serve the true interests of the Jewish people.

In a work published in Russian in 1901, Zamenhof gave three reasons why Zionism was unrealizable: "firstly, because the Hebrew language is not alive, and if the Jewish religion did not exist, it would have died a long time ago; secondly, Zionism is wrong in its conception of Jewish nationalistic feeling: the Jews of various countries have no common ground apart from the religious one; thirdly, Palestine is too small – it will contain approximately only two million – so the whole Jewish question will not be solved."

Note his emphasis on the Jewish religion as the key driver of Jewish identity. This makes sense to me. His views on nationalism, on the other hand, I have reservations about.

In 1914 he wrote: "I am deeply convinced that all nationalism represents only the greatest misfortune for humanity, and that the aim of all people should be: to create a harmonious humanity. It is true that the nationalism of oppressed nations – as a natural self-defense reaction – is much more forgivable than the nationalism of oppressing nations; but, if the nationalism of the strong is ignoble, the nationalism of the weak is imprudent; both give birth to and support each other, and present a vicious circle of misfortunes, from which mankind will never emerge, if each of us will not sacrifice his group self-love and will not try to stand on completely neutral ground."

There is real insight here; the logic is consistent and, within limits, compelling. The problem, as I see it, is with Zamenhof's assumptions: his Enlightenment-inspired, "blank slate" view of human nature; and his implicit conflation of nation and nation-state.

Zamenhof ignores the fact that "group self-love" is a perennial human reality. Certainly it can get out of hand and generate xenophobia and violence, but it also plays a positive – in fact an essential – role in encouraging cooperative behaviour within groups.

The Zionist movement understood this and rejected Zamenhof's dogmatic and naive internationalism. So far, so good.

What the Zionists didn't grasp, however, is that combining their views on the importance of group identity with a perspective on nationhood shaped by Romantic political myths would only lead to trouble. Given the complexities of ethnic and cultural divides, seeing the nation-state as a universal solution, as the only way to satisfy ethnically-based yearnings and feelings of group identity, is both confused and dangerous.

Such an approach leads inevitably to "a vicious circle of misfortunes" (as Zamenhof put it), to an unending cycle of conflict and violence.

Language policies in Malta

I am posting here the language-related paragraphs of a short piece entitled "Maltese culture and language" which appeared earlier this month at Conservative Tendency and also on my WordPress site:

Maltese is a very unusual language. Its grammatical structure and morphology derive from an old form of Arabic (Siculo-Arabic) while much of its lexicon derives from Italian and other European languages (including English). Since independence in 1964, the Maltese language has been strongly promoted and supported by the government and official bodies (with a bit of help from the European Union since 2004).

In general, I am not a supporter of keeping languages alive via legislation and regulation. Language change and death are natural processes, and individuals should as far as possible be free to choose what language or languages they want to speak and what language or languages their children should speak and be educated in. I recognize, however, that language policies of one kind or another are necessary in multilingual jurisdictions and decisions must be made. As I see it, when it comes to a choice between promoting a local language (or dialect) and promoting a more widely-spoken and professionally useful one, something is gained and something is lost either way.

As I understand it, the policy during British colonial times was to promote the use of English and standard Italian rather than Maltese. Italian is still spoken, though it is less prevalent than it was.

English remains an official language and is taught in schools, but proficiency varies greatly and most locals (including young professionals) are more comfortable speaking Maltese than English. The situation is slowly changing, however. Survey results indicate that Maltese under-20s are more likely than other age groups to favour English and to identify English as their first language.

Sunday, February 4, 2024

Shorelines

There is something universal about seas and oceans and even standing on the shore you can sense it. Last year on a beach on a Greek island I was vividly reminded of childhood holidays by the sea on the other side of the world. More recently I have been living in the district of Msida in Malta and have been spending a lot of time wandering the shorelines of Ta' Xbiex, Gżira and Sliema. Again, that sense of familiarity – despite the unfamiliar (primary) language and culture of the island.

Over time inhabited lands are necessarily designated, defined and shaped by cultures of one kind or another. Seas and oceans have names and are often affected (negatively!) by human activities but generally speaking they are not marked or "owned" by particular cultures.

There was once a culture of seafarers which incorporated not only practical and technical knowledge and norms of behaviour – as today's somewhat attenuated version still does – but also folk wisdom and fanciful myths. This culture was essentially reactive: it was a coping culture, not a building culture. It was also essentially universal in that – for the most part at any rate – it transcended national and local preoccupations. (The life and works of the Polish sea captain-turned-writer, Joseph Conrad, provide ample evidence to support this last claim, I think.)

As I see it, then, the shoreline's fascination is enhanced by the fact that it lies between two worlds: the culturally-bounded and the culturally-unbounded (or universal).

Is some kind of deep memory at work here also? It wouldn't be surprising, given the central role that seaside (and lakeside) environments played in shaping the evolutionary development of our species.

Going much further back, rock pools, warmed by the sun, constituted a crucially important environment for the development of some of the earliest lifeforms on this planet. The first photosynthesizing organisms (cyanobacteria) appeared about 2.7 billion years ago in such environments.

This takes us well beyond any plausible "deep memory" hypothesis, of course, but a fascination with rock pools, and intertidal zones more generally, needs no such explanation. Factors such as natural – and perhaps intellectual – curiosity are at play here.

My own aesthetic preference for rocky (as distinct from sandy) shorelines is easily explained in such terms – in terms, that is, of general cast of mind, combined (in my case) with extensive childhood exposure to such environments and memories thereof.

Sunday, December 18, 2022

AI, work and human dignity

Speculations about the impact of AI and imagined technological utopias or dystopias necessarily draw on – and reveal a lot about – our fundamental assumptions about human nature. Robert Gressis recently wrote a piece on these themes.

Though his approach is open and undogmatic, his basically metaphysical (and indeed Kantian) assumptions show through. In my opinion, they are counterproductive and create unnecessary problems and confusions.

“I tell myself,” he writes, “that we are not mere playthings of nature, but are instead rational beings who can and should conduct themselves in a certain way, lest we dishonor our dignity.”

Our dignity lies, as he sees it, “in rising above nature.” This just doesn’t make sense to me.

Nor does any notion of “free will” which goes beyond the ordinary (and legal) sense of acting freely (i.e. being of sound mind and not being coerced).

What’s more, ideas like “rising above nature” – and the (originally religious) notion of free will – are quite unnecessary. In fact, I would go so far as to say that only in their absence can we maintain a robust and reasonable conception of human dignity.

The only dignity that counts – or indeed makes sense – is that which is exemplified in behaviour. It relates to how we conduct ourselves (given all the constraints etc. which inevitably apply in specific situations).

Do we behave like egomaniacs or spoilt brats? Or do we apply a modicum of intelligence to our activities, exercising appropriate restraint, self-discipline etc.? Are we sensitive to the needs of others? Are we responsible and trustworthy? These are the sorts of factors which determine whether or not human dignity is being exemplified.

And – significantly – AI does not challenge us in these sorts of matters. Morality and other value-related matters are distinctively human – and will remain so.

Gressis makes a comparison – and contrast – between future redundant humans and pets.

“[...] I think the utopia-worriers—the people who fear that an AI-fueled paradise will be unsatisfying—are fearful because they think it should be unsatisfying. But should it be unsatisfying? Pets have guided my thinking on this question. I look at my cat, and I joke, “you get paid way too much.” The point of the joke is that I’m expecting more from my cat than he can give. Sure, he’s cute and I like petting him, but he doesn’t do anything useful, like killing bugs. Instead, he just lies around, gets some scritches, and licks his genitals.”

The analogy is amusing. But the crucial point here, I think, is that pets are quite different from us. They don’t have our range of freedom. They are more hard-wired than we are. And, of course, they don’t have language.

Gressis writes: “If the AI-optimists are right ([…] big if, but it doesn’t seem impossible), then there will come a time when humans will be as useful as pets. Our use-value will consist almost entirely in our ability to entertain each other.”

Not just to entertain but to communicate and interact in multiple ways. To challenge, to love, to annoy, to betray… Again, it’s the moral realm (broadly conceived) that counts – and always will count – for us. And it cannot be usurped by any technology.

AI taking over various jobs is obviously threatening, financially and psychologically, for those who earn their living and/or derive their self-esteem from the kinds of work AI is poised to replace. But this is simply an extension of a familiar pattern which is evident throughout history – at least during periods of rapid technological progress. The only difference now is that it is not just manual and low-level office workers who are being made redundant but also professionals.

I think that Gressis sees AI as being more problematic than I do partly because of his metaphysical presuppositions and partly because he sees work as being more morally and psychologically central and important than I see it to be.

For me, work is just another unfortunate necessity: something one has to do to earn a living, support a family, build up savings. Most of my – rather patchy – working life was spent teaching in universities. It was (much of the time, at any rate) reasonably congenial and pleasant. But, even when things were going well professionally, my sense is that I generally experienced happiness only during idle moments and in interactions not directly related to work, rather than through my actual work or (admittedly modest) professional achievements.