“The asymmetry in the brain that led to language is unlikely to have appeared because of language … it is likely to have appeared for some other reason, and basically got co-opted by language.” — Christopher Walsh
I have been reading The Master and His Emissary by Iain McGilchrist. The book explores the two hemispheres of the brain, right and left, and develops a theory from the asymmetry between them: essentially, that the left hemisphere can over-categorise and over-simplify the world, and can shut up the broader, synthesising right hemisphere while it does so.
The book is to be applauded for its relatively disciplined and rigorous approach to the subject matter, which has been used and abused by all manner of people. The difference between the hemispheres has been used to flog any number of terrible self-help guides, with varying advice for people who are left-brained, right-brained, upside-down brained or not-yet-potty-trained.
It’s also very dense. It clocks in at ~450 pages, with a lengthy bibliography and notes - McGilchrist is fond of a long excursus in 6-point type. There are a number of ideas, all of which are marshalled to support the thesis that I ineffectively summarised above. It’s impressive, and it’s hard not to be overawed by the scholarship. McGilchrist was an English professor before retraining in medicine and neuroimaging, and brings hefty volumes of philosophy and literature into his work.
As someone who studied history at undergraduate level, I find it nice to see philosophy deployed effectively in science; I recall a lecture during my neuroscience masters where the lecturer essentially tried to explain the problem of induction without reference to Hume, or to philosophy at all. Sometimes the scientists need to reach across the Two-Cultures-Chasm as well.
I thought perhaps the best way to approach this book was to break it down into sub-essays, each delineating and picking off an area that McGilchrist discusses. People like to say that his overall thesis is overblown, but it seems like a valuable task to take it apart and consider what is worth keeping and what should be thrown away.
I’m trying to work by the rule that the topic areas should be interesting in their own right, regardless of whether they contribute to his theory. This blog isn’t so much interested in a big McGilchrist panegyric-encomium-dithyramb (choose your fighter) as in discussing things that interest me personally and that deserve to be more widely known.
So, without further ado. Language. Where did it come from?
IN WHICH I WADE DEEP INTO WATERS THAT ARE MUCH TOO DEEP
An unambitious start, I hear you cry. Just the origin of language? Pah. I worked that out when I was a wee bairn, drinking nothing but goat’s milk with one hand and strangling wolves with the other, while teaching myself to write in ancient Greek using cow dung.
So I wasn’t really planning to discuss the entirety of the origins of language, but there’s an interesting argument that says, in essence: Language originates from hand gestures.
There are basically three theories for language origins:
1. Gestures were used as a language system, and were then gradually converted into spoken language.
2. Vocalising or protolanguage came first, and gestures were gradually integrated into it.
3. Both gestures and vocalising developed in tandem.
Number 1, which we’ll start with, apparently goes back to the philosopher Étienne Bonnot de Condillac, who I’ve never heard of, but according to the more temporally recent Michael Corballis and Adam Kendon, there are a number of reasons to believe that when it comes to language, gesture came first:
Great apes use gestures a lot. They’re easy to learn, evolutionarily speaking.
Great apes, with some difficulty, can be taught to use hand gestures derived from sign language in symbolic ways.
Young children point to communicate before they speak to communicate. Babies often point and babble together though, and it kind of seems like they’re trying to use both to get their point across.
Sign language exists. You can have language without speech, but it is difficult to imagine language without gesture. Gesture is said to be as much as 90% of communication between people (when they are using words!).
Speech is accompanied by complex manual gestures, suggesting that it sits on top of a scaffold of gesture.
Mirror neurons activate in primates while they watch other primates grasp things. It is thought that mirror neurons allow for understanding the intentions of others, and this could be integral to language development.
So, how do we find out what came first: the speaken or the gesturegg? Perhaps a more fundamental question has to come first. What defines a language?
Signals, firstly, have to be intentional. It’s not language if you’re not trying to communicate. Even if Deepak Chopra’s twitter is scientifically indistinguishable from bullshit, both Chopra and bullshit are trying to communicate something. Here’s Jacques Prieur:
Intentionality is a fundamental element of most operational definitions of communication because signaling must be intentional to be considered as communication. Researchers have proposed multiple criteria to characterize intentionality in communication, such as flexibility of signal use, sensitivity to the targeted recipient’s attentional state, and the signaler’s persistence in attempting to achieve its social goal.
Happy? Let’s move on.
Scientists often separate human languages from those of other species via generativity. Human languages have the power to generate new words, morphemes and structures. To do that, there have to be rules of language, which in turn allow for the production of new structures. And those rules need to be recursive, i.e., they can be applied to their own outputs, with no limits (just like Bradley Cooper in Limitless). Other animals don’t really do generativity. That’s why we’re so special (just like Bradley Cooper in Limitless).
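To make generativity a bit more concrete, here’s a minimal sketch in Python (the nouns and the single ‘that chased’ rule are made up purely for illustration) of one recursive rule being applied to its own output, churning out sentences nobody had to memorise individually:

```python
import random

# A toy illustration of generativity: one recursive rewrite rule applied to
# its own output produces an unbounded set of novel sentences.
NOUNS = ["the dog", "the cat", "the rat", "the linguist"]

def noun_phrase(depth):
    """Expand NP -> noun, or noun + 'that chased' + NP, recursing up to `depth`."""
    noun = random.choice(NOUNS)
    if depth == 0 or random.random() < 0.5:
        return noun
    return f"{noun} that chased {noun_phrase(depth - 1)}"

for _ in range(3):
    print(noun_phrase(depth=4).capitalize() + " slept.")
    # e.g. "The rat that chased the cat that chased the linguist slept."
```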
Since generativity is a pretty hard thing to determine, people often check whether communication systems fall into certain patterns to judge whether they are language-like. This is because languages tend to follow natural laws. This insight goes back to Claude Shannon, who showed that you could use entropy to model communication. For a simple example, if I were to show you this: B______, there are a number of words it could be. But with each additional letter I give you (BO_____) you become more (BOL____) and more (BOLLO__) and more (BOLLOC_) sure about what the word will be, until you finally realise that I’m talking about the French television host, (Yvan Le) BOLLOCH.
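If you like, here’s a minimal sketch of that narrowing in Python, using a made-up toy lexicon rather than real frequency data: the Shannon entropy of the next letter shrinks as the prefix grows.

```python
import math
from collections import Counter

# A toy lexicon standing in for "all the words I might have meant" --
# purely illustrative, not a real frequency list.
lexicon = ["bolloch", "bottle", "borrow", "bolster", "balloon",
           "banana", "become", "believe", "bicycle", "bureau"]

def next_letter_entropy(prefix, words):
    """Shannon entropy (in bits) of the next letter, given a prefix."""
    candidates = [w for w in words if w.startswith(prefix) and len(w) > len(prefix)]
    if not candidates:
        return 0.0
    counts = Counter(w[len(prefix)] for w in candidates)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

for prefix in ["b", "bo", "bol", "bollo"]:
    print(f"{prefix!r}: {next_letter_entropy(prefix, lexicon):.2f} bits of uncertainty")
# Entropy falls with each added letter, because each letter narrows the candidates.
```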
It’s possible to use this phenomenon much more broadly to map languages, and to see whether, for instance, species like dolphins, or actual aliens, are communicating in regular patterns.
Other linguistic laws include Zipf’s Law, named after George Kingsley Zipf, who became a linguist to figure out why his last name was so different from his other two names. Zipf’s law states that the most common word in any language will be used roughly twice as much as the second most common word, and three times as much as the third most common word, and so on. This holds, at least approximately, for all languages, including made-up ones like Esperanto or whatever Lisa Gerrard calls her Gladiator theme song language.
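As a rough illustration, here’s a hedged sketch that counts word frequencies in whatever plain-text file you have lying around (sample.txt is just a placeholder) and compares them against the 1/rank curve that Zipf’s law predicts:

```python
from collections import Counter

# Count word frequencies in any plain-text corpus and compare the top ranks
# with the 1/rank curve predicted by Zipf's law.
text = open("sample.txt", encoding="utf-8").read().lower()  # placeholder corpus
counts = Counter(text.split())
ranked = counts.most_common(10)

top_freq = ranked[0][1]
print(f"{'rank':>4} {'word':>12} {'observed':>9} {'zipf predicts':>14}")
for rank, (word, freq) in enumerate(ranked, start=1):
    print(f"{rank:>4} {word:>12} {freq:>9} {top_freq / rank:>14.1f}")
# With a big enough text, the observed column hugs the predicted one.
```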
Paul Menzerath also has a law named for him, proving the Zipf-Menzerath law which states that to be a linguist your name has to have at least one ‘Z’.
The Menzerath law on its own states that the larger the linguistic construct, the smaller the size of its constituents, and vice versa. For instance, the longer a sentence, the shorter the clauses, or, the longer the word, the shorter the syllables.
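And here’s a crude sketch of the Menzerath tendency, again over any text file you like (sample.txt is a placeholder). Proper studies measure clauses and syllables; sentence length in words versus mean word length in characters is only a rough proxy, so treat the output as illustrative.

```python
import re
from statistics import mean

# Crude proxy for Menzerath's law: do longer sentences (more words)
# tend to contain shorter words (fewer characters) on average?
text = open("sample.txt", encoding="utf-8").read()  # placeholder corpus
sentences = re.split(r"[.!?]+", text)

pairs = []
for s in sentences:
    words = re.findall(r"[A-Za-z']+", s)
    if words:
        pairs.append((len(words), mean(len(w) for w in words)))

# Bucket sentences by length and report the average word length per bucket.
for lo, hi in [(1, 10), (11, 20), (21, 40), (41, 200)]:
    bucket = [wl for n, wl in pairs if lo <= n <= hi]
    if bucket:
        print(f"sentences of {lo}-{hi} words: mean word length {mean(bucket):.2f} chars")
```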
Such laws are not binding evidence of ‘language’ in their own right, but if a type of communication follows them, it is evidence that the communication is language-like. Both human gestures and vocal communications follow all of the above laws, which suggests they both have fundamentally language-like properties in their own right. Ape gestures are less generative, although the extent to which they are generative is disputed.
WHAT DID YOU DO FOR THE LAST 4 TO 8 MILLION YEARS?
Let’s look at the timeline for a second. There don’t appear to be any forms of generative language among the apes, so we can probably safely conclude that language developed after the hominids split from the other apes. This split occurred somewhere between 4 and 8 million years ago (or mya, as the cool anthropologists say). One of the earliest well-known hominids is Australopithecus afarensis, or this gal:
She’s bipedal, and she’s called Lucy, after Lucy in the Sky with Diamonds, which happened to be playing on a tape recorder the evening that the skeleton was discovered in 1974. So we missed out on her being called Polythene Pam or Sexy Sadie or Sloopy the Skeleton, but that’s okay.
Being bipedal is critical because suddenly the hands are freed to do things, like play with your genitals, er, sign to your friends about dangers and food in your neighbourhood.
The earliest form of Homo (that’s our genus name, you child) is Homo habilis, which dates from about 2.2 to 1.6 mya. Then we get Homo erectus (from standing up, you child), and about 1 mya, H. erectus goes on an international tour, hitting up venues across Africa, Europe and China. Then to get anatomically modern humans you add another sapiens to Homo sapiens, so Homo sapiens sapiens (not sure how you’ll interpret this one in an immature way, but I can see you trying, you child).
H. sapiens sapiens emerged in Africa about 200,000 to 150,000 years ago, migrated out of Africa about 100,000 years ago, displacing everyone else who’d gone before them, and got around to dispatching the Neanderthals about 35,000 years ago. Here’s a chart of all of that, stolen wholesale from Michael Corballis:
So somewhere on the right branch of that tree, we get language. To narrow down when language developed, a good question to ask might be something like, “Gee, what do we need to speak languages to each other?” Thanks for asking! Two things are critical: big brains that can support language, and vocal structures in the throat that let you actually say words.
The australopithecines had ape brains. But from about H. habilis, we get rapid expansion of the brain, and cranial capacity keeps expanding into the time of H. erectus and H. sapiens. Our current brain is way off the chart on the brain-body ratio. It’s about three times larger than it should be for a primate our size. Basically, the prolongation of childhood allowed our brains time to develop, and that in turn allowed the formation of the brain regions that can support language. We get
MULTIMODALITY
How did you originally think language developed? A big hodgepodge of yelling and gestures and lip smacks, all of which aimed to convey the key message that people need to stop touching their genitals in front of you? Well, you were probably right.
This is according to a big review paper from Jacques Prieur et al. who cite this picture of a sassy monkey as evidence (and then also like fifty studies).
They are likely right. Corballis, from earlier, seems insistent that there was a stage of pure gesture, setting himself up against his nemesis, Bickerton, who argues that the antecedent stage to language was a protolanguage. But why wouldn’t it be both?
Language is likely to have multicausal origins. Most communications come bundled with other forms of communication, like when your coworker sends you an email with an attachment: the mostly blank project that you were supposed to have completed days ago. Or more simply, vocalising, in most forms, comes attached to a bunch of mouth, face, and eye movements. As does gesture. Given their evident integration, it seems strange to suppose that one arrived, and then the other.
Gestures support the vocal use of language as it is used now. They allow speakers to give additional context that speech alone doesn’t necessarily offer: the manner of the action a verb describes, the location of something being referred to, or its dimensions.
Language comes about because we start hanging out together in tighter-knit groups. This produces social hierarchies, which language is useful for communicating about. Group living also reduces the risk of being eaten, which allows for the development of less primal communication systems. But both vocal and gestural communication likely operated together synchronously for at least some of our phylogenetic history.
Then, as vocal communication is apparently the most efficient form of communication in terms of energy usage, it gradually outstrips gesture as the supposedly ‘dominant’ form of communication. Gesture can’t be transmitted in the dark, or if people are facing away from you, or distracted.
AND YOUR POINT IS?
Sorry, yes. Let’s go back to McGilchrist.
The origins of language may unravel some of the structure of our thoughts, which are notoriously hard to pin down. In neuroscience, one of the biggest problems with studying the brain is what is known as Leibniz’s Gap - thoughts cannot be seen merely by observing brain properties or events.
McGilchrist raises the interesting point that while language is often considered the medium of thought, there’s no reason that this is necessarily true. Depending on the particular subset of thoughts, language may play a role in their construction. But this may not be true for all thoughts. And it may not be true at all.
Broadly, however, language could easily be the end of the line in the cognitive construction train. We think, language emerges as the product of the thought process, and there’s a sort of natural assumption that language must therefore have been involved in constructing the thought in the first place. But that doesn’t follow.
When I am asked to produce information, as, say, in response to a question, what often happens is that an answer rapidly appears in my head, with totally indeterminate origins. I can then think in language, more consciously, about whether that answer is correct. What usually happens is that I grow less confident in the original answer. This is in line with a phenomenon in the confidence literature which shows that your confidence in an answer decreases as you think about it more.
The suggestion of Kiani et al. (2014) is that this is just the brain using the time taken to reach an answer as a heuristic for how difficult the question is. I can and do think in language, say when I need to solve a word-related problem. It’s just harder, and slower, and I have less confidence in my answers. And, often, if I do produce a word initially in response to a question, that original word is the correct response. Not always, but with far higher accuracy than you’d expect given the speed of thought that produces it.
Anecdotally then, language seems to be a top-level process that is ill-equipped to deal with some tasks and well-equipped for others. That’s likely because it arrives late, and then gets slapped on top of some existing network of thought processes.
McGilchrist uses this platform to argue, in the vein of this Norman Geschwind paper, that language is less about communication and more about manipulating and controlling the world. That is certainly how language often feels to me. It can be used for communication, but often lacks nuance and depth. But when it is used to catalogue and label, there is no better tool.
That idea - that language is not fundamental, but sits on top of thought processes - seems quite profound. For McGilchrist, this lays down a challenge to the purported dominance of the left hemisphere, and he builds it into a broader argument about our identity, which we often link to our thoughts. What we think is us, right? But the argument that McGilchrist attempts to make is that our self is actually much lower down the brain stem than we think it is. The top-level thoughts are at the top level, and they don’t produce selfhood. As Nietzsche puts it:
Thoughts are the shadows of our feelings - always darker, emptier, simpler.
So where are the feelings? Are they in the right hemisphere? Are they more fundamental to our identity? I don’t think I’m clear on these questions yet, but I’ll try to write about this soon.