Evolutionary linguistics

How Does Language Work?

Babel's Dawn - Wed, 08/16/2017 - 17:14

These days we expect our sciences to have a practical side. We understand how things work and make use of the knowledge.

Science began as common sense put into theoretical shape by Aristotle, and pretty much every advanced science has since begun by showing what common sense missed and Aristotle got wrong. Common sense says the sun revolves around the earth, and Aristotle developed a theory of physics that took such common-sense observations for granted. Aristotle’s physics, however, was purely theoretical, without practical benefit.

Copernicus, Galileo and Newton overturned that common sense and introduced a more modern physics. The proof of the new science was that it led to practical applications, first in mechanics and later in space travel.

At the time of Galileo, René Descartes was also introducing a new theory of physics, one that relied solely on logical hypotheses and deduction. Although widely admired at the time, this work has not held up. For one thing, it did not address the common sense of earlier ages; for another, it led to no practical or explanatory work.

Sixty years ago the study of language grew radical without addressing common sense or Aristotle. The common-sense proposition was that language is meaningful, and the Aristotelian theory was that language works by combining sound with meaning. Reasonable as this definition sounds, nobody ever figured out how to use it, and the practical traditions of rhetoric and composition pay no attention to Aristotle.

The linguistics movement of the late 1950s also ignored Aristotle and common sense. It pursued questions based on the logical hypothesis that language is a computation. Interestingly, the movement was led by a young thinker whose great hero was Descartes, and, like Descartes’s, the movement’s work has led to no practical or explanatory success. It answers none of the traditional questions about language—e.g., Why are there so many languages, and how can they be so different? What is meaning? How could it have begun?—and offers no practical clues to using language more effectively, or translating texts, or improving speech therapy, or overcoming dyslexia.

The problem seems to lie in the assumption that sentences are computations. On its own, the idea has some plausibility: if the brain is a computer, its output must be a computation. In computations, however, the same input produces the same result. In language, the result is not so predictable. If I participate in a soccer game and must report what just happened, I might say “I kicked the ball,” or “I sent the ball flying,” or “The ball really jumped off my toe,” or “I missed the goal,” or “Joe was racing for the ball but I beat him to it,” and so on ad infinitum.

This observation brings us back to meaning. Our utterances depend on what we have to say and language seems to communicate meaning. Could Aristotle have been right after all?

No. The proposition that language combines sound with meaning cannot be correct. The problem is that meaning is not a physical thing that we can somehow combine with sound waves. It is a ghost that Aristotle inserted into language back when inserting ghosts was no vice. He also inserted yearning into his list of elements: fire yearned to be high in the sky and rose toward the sun; earth yearned to go to the center of the world, so earthen matter fell and even accelerated as it approached its goal.

Kicking out the ghosts of physics was not easy because the things that Aristotle explained still needed explaining. The solution lay in saying that the rising smoke and falling meteors are effects of gravity.

My work on this blog has likewise persuaded me that meaning is an effect, rather than a cause.

The simplest example might be two people standing together when one of them points toward something. The other looks over and sees a policeman beating a man. The gesture directed the other’s attention. The meaning of the gesture came when the second person redirected attention and saw something new.

Suppose instead, one person tells another, “I saw a cop beating up a guy today.” The meaning is discovered by the same general principle of directing attention, the difference being that instead of directing a person’s eyes, the speaker directs the listener’s imagination. In both cases, the meaning is the result of the directed attention.

This reversal of meaning changes the task of the speaker or writer. Instead of focusing on inserting meanings, the key to skillful language production lies in producing sentences that the audience can follow. How do we do that? By paying attention to the demands we place on the listeners’ attention.

The old man the boat. Oh, I’m sorry, did I lose you? That is not surprising. A reader first takes “The old man” as a noun phrase and needs a second look to grasp that “man” is a verb. This kind of sentence, known as a garden path, is well known in linguistics and is strong evidence that listeners construct meaning as they go along. If they go astray, they must retrace their route, looking for the point where they got lost.

The old suffer many indignities. I hope that sentence was easier to follow. Why? Because readers know to shift their attention from the old to suffer. This sentence helps the reader by making it easy to shift attention.
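The point about readers building and revising a parse as they go can be made concrete with a toy sketch (the miniature lexicon and sentence templates below are invented for illustration; this is not a real parser or a claim from the posts). Enumerating the possible part-of-speech taggings of “The old man the boat” shows why the greedy first reading fails: the only tagging that completes a sentence is the one readers reach on their second pass, with “old” as the noun and “man” as the verb.

```python
# Toy garden-path illustration: which taggings of "The old man the boat"
# form a complete sentence under a miniature grammar?
import itertools

LEXICON = {
    "the": ["Det"],
    "old": ["Adj", "N"],   # "old" can modify a noun or be the noun itself
    "man": ["N", "V"],     # "man" can be a noun or the verb "to man"
    "boat": ["N"],
}

# The only sentence shapes this toy grammar accepts.
TEMPLATES = {"Det N V Det N", "Det Adj N V Det N"}

def parses(words):
    """Return every part-of-speech assignment that forms a full sentence."""
    valid = []
    for tags in itertools.product(*(LEXICON[w] for w in words)):
        if " ".join(tags) in TEMPLATES:
            valid.append(tags)
    return valid

print(parses("the old man the boat".split()))
# → [('Det', 'N', 'V', 'Det', 'N')]: "old" must be the noun, "man" the verb.
```

The reader’s first attempt, tagging “old” as an adjective, matches no template and has to be abandoned, which is exactly the retracing the post describes.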

I have published a few papers online (here and here) demonstrating that syntax directs attention, and that oddities proposed to illustrate a universal grammar can be readily explained as devices for directing attention.

I have been a decent writer for many years, but I am a better one now because I understand how to help readers make their way through complex sentences. So there has been a practical benefit to my years of wrestling with how language works. At last, rhetoric may be given a clear, theoretical footing.

How Old is Speech?

Babel's Dawn - Wed, 08/02/2017 - 20:40

This blog takes the position that language, in the sense of two or more people focusing together on a topic, is quite old. Archaeologists, Chomskyans, and others tend to place it much more recently in the human lineage, at about 100,000 years ago or fewer; I put it at approaching 2 million years. My main grounds are our lineage’s cooperativeness and the idea that it took a long time to create the verbal environment that we now take for granted.

Slow evolution

I noticed an article from a couple of weeks back about the “truly” bilingual child, and I came across this passage, “Pediatricians routinely advise parents to talk as much as possible to their young children, to read to them and sing to them. Part of the point is to increase their language exposure, a major concern even for children growing up with only one language.”

It is a familiar sentiment, but it set me thinking about the days when language was really new. At first people probably did not have much to say to one another; talking was an occasional thing, and even today verbal richness is impaired if we are not surrounded by words. When language was new our ancestors could talk, but they were still linguistically impoverished compared with today’s oral cultures. Their children did not grow up hearing ceaseless yakety-yak and did not create a rich verbal environment themselves.

We can assume that language was first used to relate news of the here and now: there is a carcass we can scavenge yonder; I just saw a lion; your mother is down at the creek. News of this type is not going to produce chatterboxes. For that you need narratives, strings of two or more sentences: (1) there is a carcass we can scavenge yonder; (2) bring some cutting stones.

It seems unlikely that early talkers went straight to sentences. The pattern we see in children is probably a quick-time recapitulation of the developmental process: words, phrases, basic sentences, richer sentences, strings of sentences. The jump from words to phrases probably came quickly, as a few captive bonobos have managed to join words meaningfully in sign language. I once heard a toddler use a phrase on her first birthday. I was inclined to attribute it to the excitement of a birthday party, but she quickly made phrases a regular part of her speech. Sentences, however, were another matter.

When we imagine early talkers—say, Homo erectus and precursors—we ought to think of their language as being like their tools: simple but a persistent part of their lives. And we should try to imagine it staying that simple for perhaps a million years while their brains grew large enough to handle the load.

Full, transitive sentences join two things with an action, e.g., the zebra kicked the lion. Children use a few verbs right away—eat cookie; want juice—but most verbs are late in arriving. Some extra maturation of the brain appears to be required for a person to unite two things through a single action. Simply perceiving what happened requires a feat of attention that may be beyond a two-year-old. Anybody who has watched an unfamiliar sport knows how difficult it is to perceive just what happens in complex, unexpected actions.

Transitive verbs allow for mythological and abstract thinking. Abstract ideas like not fair are probably very old, but the idea of making something fair—as in I will weigh my mischief in the balance with three days’ labor—requires a very difficult concept. The verb weigh…in the balance is a metaphor that somehow compares apples (my mischief) and oranges (three days’ labor). We take for granted blind justice holding up scales, but the first person who spoke of such things was a first-class poet.

By 100,000 years ago, sentences, narratives, abstractions and metaphors were probably all there for the chatterboxes to drone on about, and to leave the archaeological clues that indicate cultures steeped in symbolism. But symbols did not spring fully ripened from the first talkers’ tongues.


The other line of reasoning that brings me to the same conclusion is Homo's hyper-sociality. The African savanna promotes togetherness. The grass eaters form herds and the predators hunt in groups. Loners like rhinoceroses and bull elephants need to be huge so the predators cannot harm them. With the savanna's emergence a few million years ago, the already social primates that stayed on the plain had to become even more dependent on one another. What emerged from the process was a terrifying new species able to stand up to the predators and bring down the herd animals. The only way this success was possible was through regular cooperation and sharing.

Going back as far as Homo habilis we know that individuals taught other individuals how to make tools. The same tools turn up in many sites even thousands of miles apart and persisted unchanged for hundreds of thousands of years. It seems likely that the teaching relied more on demonstration than on telling, although words may have played a part.

Cooperation is not the first solution Darwinian processes attempt, and most living organisms depend only on themselves, but super-cooperative species like the eusocial insects prosper because they share information. When cooperative sharing appears, evolution has found a trick that pays off. The Homo lineage has probably been pointing and demonstrating since the beginning, meaning we have been motivated to help one another for almost two million years. Work with apes has already established that our ancestors had the brains to use words. If we combine the presence of brains and motivation, it seems strange to insist that words did not come for the first 1.7 million years. Indeed, I doubt anybody who insists language must be new. If they want to persuade me, they should find some evidence that cooperation is new, or that a properly motivated ape lacks the tools to tell me a story.

Usage context and overspecification

A replicated typo - Wed, 07/26/2017 - 22:57

A new issue of the Journal of Language Evolution has just appeared, including a paper by Peeter Tinits, Jonas Nölle, and myself on the influence of usage context on the emergence of overspecification. (It actually appeared online a couple of weeks ago, and an earlier version was included in last year’s Evolang proceedings.) Some of the volunteers who participated in our experiment were recruited via Replicated Typo – thanks to everyone who helped us out! Without you, this study wouldn’t have been possible.

I hope that I’ll find time to write a bit more about this paper in the near future, especially about its development, which might itself qualify as an interesting example of cultural evolution. Even though the paper just reports on a tiny experimental case study addressing a fairly specific phenomenon, we discovered in the process of writing that each of the three authors had quite different ideas of how language works, which made the write-up process much more challenging than expected (but arguably also more interesting).

For now, however, I’ll just link to the paper and quote our abstract:

This article investigates the influence of contextual pressures on the evolution of overspecification, i.e. the degree to which communicatively irrelevant meaning dimensions are specified, in an iterated learning setup. To this end, we combine two lines of research: In artificial language learning studies, it has been shown that (miniature) languages adapt to their contexts of use. In experimental pragmatics, it has been shown that referential overspecification in natural language is more likely to occur in contexts in which the communicatively relevant feature dimensions are harder to discern. We test whether similar functional pressures can promote the cumulative growth of referential overspecification in iterated artificial language learning. Participants were trained on an artificial language which they then used to refer to objects. The output of each participant was used as input for the next participant. The initial language was designed such that it did not show any overspecification, but it allowed for overspecification to emerge in 16 out of 32 usage contexts. Between conditions, we manipulated the referential context in which the target items appear, so that the relative visuospatial complexity of the scene would make the communicatively relevant feature dimensions more difficult to discern in one of them. The artificial languages became overspecified more quickly and to a significantly higher degree in this condition, indicating that the trend toward overspecification was stronger in these contexts, as suggested by experimental pragmatics research. These results add further support to the hypothesis that linguistic conventions can be partly determined by usage context and show that experimental pragmatics can be fruitfully combined with artificial language learning to offer valuable insights into the mechanisms involved in the evolution of linguistic phenomena.
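For readers unfamiliar with the iterated-learning paradigm the abstract describes, here is a minimal, hypothetical sketch (invented for illustration; the lexicon, numbers, and learning rule are not the authors’ actual design or data). The core mechanism is the transmission chain: each generation produces utterances, and those utterances become the next generation’s only evidence about the language, so production tendencies such as overspecifying can accumulate or wash out across generations.

```python
# Minimal iterated-learning transmission chain (illustrative only).
import random

random.seed(0)

# Meanings: (shape, color). Only shape is communicatively relevant here;
# naming the color as well counts as overspecification.
MEANINGS = [(s, c) for s in ("circle", "square") for c in ("red", "blue")]

def speak(lexicon, meaning, overspec_bias):
    """Produce a label: shape word alone, or shape+color (overspecified)."""
    shape, color = meaning
    if random.random() < overspec_bias:
        return f"{lexicon[shape]}-{lexicon[color]}"  # overspecified form
    return lexicon[shape]                            # minimal form

def learn(utterances):
    """A 'learner' estimates how often its teacher overspecified."""
    overspecified = sum("-" in u for u in utterances)
    return overspecified / len(utterances)

def transmission_chain(generations, initial_bias):
    """Iterated learning: each generation's output is the next one's input."""
    lexicon = {"circle": "mo", "square": "ki", "red": "fa", "blue": "lu"}
    bias = initial_bias
    history = [bias]
    for _ in range(generations):
        utterances = [speak(lexicon, m, bias) for m in MEANINGS * 10]
        bias = learn(utterances)  # the next generation inherits this estimate
        history.append(bias)
    return history

print(transmission_chain(generations=5, initial_bias=0.1))
```

In this bare-bones version the bias just drifts; the experiment’s point is that manipulating the usage context (how hard the relevant dimension is to discern) can push such drift systematically toward overspecification.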

In addition to our article, there are a number of other papers in the new JoLE issue that are well worth a read, including another iterated learning paper by Clay Beckner, Janet Pierrehumbert, and Jennifer Hay, who have conducted a follow-up on the seminal Kirby, Cornish & Smith (2008) study. Apart from presenting highly relevant findings, they also make some very interesting methodological points.

Language Among the Topsy-Turvy

Babel's Dawn - Tue, 07/18/2017 - 00:23

In the last post I commented on the paper “Wild Voices” by Chris Knight and Jerome Lewis in Current Anthropology. The article focuses on the social changes that were required to make language possible. The changes should be generally familiar to regulars on this blog.

The main one is the switch from a society based on dominance and submission to a community held together by trust and a willingness to cooperate.

These behavioral changes have been accompanied by several biological changes as well. One, mentioned before on this blog, is the switch from black to white eyes, which makes it easy to see where one’s attention is focused. A couple of important reflexive changes have occurred as well. For example, apes respond to threats from others with a reflexive “fear grin” that indicates nervous submission. That reflex has been transformed into the human smile, which signals relaxed good humor in friendly company.

And laughter provides a weird combination of friendliness and aggression. An example not mentioned in the paper is late-night TV anti-Trump satire, which bonds the laughing audience while humiliating its target.

The authors speak of a “principle of reversal,” i.e., a series of steps that result in a reversal of the old ape standard into something new. The change of the grin to a smile turned a signal of submissive fear into one of confident trust.

Other reversals saw mothers who never let anyone else touch their infant become mothers who let many others help with the care and even delivery of infants.

Another reversal necessary for people using modern languages is the signaling of non-physical facts through ritual. A wedding ritual, for example, changes the way the entire community understands the relationship between the marrying people. In many contemporary societies this ritual includes vows to love one another, so that language is part of the ritual. And many groups include verbal prayers in their rituals, but more is claimed for the ritual than its physical actions: identities and spiritual natures are said to change.

Once introduced, these changes cannot be undone. A shift from black eyes to white eyes is one small shift, but as part of a series of changes that cannot be taken back, something novel and lasting appears.

A particularly important change was the new relationship between males and females. Studies of animal behavior typically find the males are dangerous and irresponsible. Male mammals fight for the right to spread their seed and then leave the females to raise any offspring. Particularly bad actors kill rival offspring and mate with the grieving mothers. Somehow humans have developed an enormous variety of cultures in which men help raise the children and keep the brawling over women to a minimum.

These changes combine to create a species that is motivated to help one another when trouble strikes, is routinely cooperative, and engages in a series of rituals and actions that cement trust. It might sound as though the authors have strayed pretty far afield from the question of how language emerged in human history, but their point is that without trust it would be foolish for speakers to risk revealing what they think, and it would be equally foolish for listeners to believe what they are told.

Trust is not easily found and maintained. It requires simple signals like smiles, bonding like shared laughter, and a series of reassuring ceremonies and actions.

This need for a trusting, helpful and cooperative species stands, no matter how you think language arose. Even if you accept Chomsky’s idea that language began as a way of thinking, it could only be externalized and become a means of communication once trust was established.

Hey Interesting Topic, What’s Your Name?

Babel's Dawn - Mon, 07/10/2017 - 22:53

I want to propose the embrace of an ugly word: logogenology (low-go-jen-ahl-oh-gee). It comes from three Greek words, logos [word], gennesi [birth], and logia [study of], and it names the study of language origins. In other words, it refers to this blog’s beat.

Normally I dislike academic coinages, but in this case I think we need to recognize that there is a community of scholars who began in many fields—e.g., linguistics, literature, biology, psychology, archaeology, and anthropology—but who share common questions and are interested in one another’s results. Thus a biologist might learn from a linguist and come to a conclusion that is of more interest to that biologist than to most linguists. Instead of identifying themselves as biologists and linguists, it might be better for such scholars to focus on their shared community and say, “I’m a logogenologist,” even if one has to add, “That’s somebody who studies language origins.”

I have come to this position after reading an interesting paper by two people calling themselves anthropologists, Chris Knight and Jerome Lewis. The paper is titled “Wild Voices” and is published in Current Anthropology. They begin their essay, “Anthropology is the study of what it means to be human. So it must be at least part of our job to explain why it is that out of 220 primate species, only humans talk.” The authors seem to be claiming that explaining speech is a part of anthropology, but they concede immediately that their account of language origins requires drawing on work from many other fields of study.

The third paragraph says: “A word of warning. The way we have constructed this article is novel, and we ask the reader not to be surprised that we conjoin a wide range of previously unconnected fields. Our basic idea is simple: using language is so closely bound up with everything else humans do—singing, ritual, kinship, economics, and religion—that no separate, isolable theory of its origins is likely to work.” While the authors seem to be writing for anthropologists, they acknowledge that their data comes from many other fields.

Members of the language-origins community will find nothing startling in the connections the authors make. So why not just admit that there is a community of scholars who use data originally developed in a variety of other fields to answer questions that are peculiar to the new community? The main logogenological question is how language began, and there are a variety of sub-questions as well, such as when it began, what bodily and cognitive changes were required, and how it became universal to the species. The first section heading in the Knight/Lewis paper poses a common sub-question of the field: “Why Do Only Humans Talk?”

The authors give a shockingly brief answer: “Since language is not a system for navigating within the physical or biological world, it follows that nonhuman primates—creatures whose existence is confined to the realm of brute facts, not institutional ones—will have no need for either words or grammar.”

What? Where did that premise come from? It seems to be based on an anthropological dictum that “words and grammar are means of navigating within a shared virtual world.” Here we see the circular trap that comes from acting as though one of logogenology’s contributory fields is able to answer logogenological questions. Anthropology is the study of the various virtual worlds (cultures and institutions) created by humanity. Thus, the element of language that interests anthropologists is how language helps members of a group navigate that virtual world. This foundation forces the answer to at least two sub-questions: (1) Why do only humans talk? Because other animals have no need for speech. (2) When did speech begin? After humans had begun to create a virtual world rich enough to require help in navigating it.

Knight and Lewis might respond that it just happens that anthropology alone is sufficient to answer these questions. But Chomskyan linguists offer different answers: (1) only humans talk because they alone are able to organize words according to a recursive syntax, and (2) speech began after a number of humans had developed the ability to think using that recursive syntax. The result of these rival answers is that anthropologists and Chomskyans quarrel a great deal and the work of science—drawing conclusions from empirical data—bogs down. Indeed, the claim to be a science looks laughable.

Let’s come at the questions from a logogenological perspective. (1) Why do only humans talk? The abstract answer is short enough: only the human lineage went through the series of evolutionary changes necessary to make language possible. What were those concrete changes? That is for logogenologists to determine. Anthropologists and Chomskyans alike, if they want to work out these changes, must leave their field of training and work as members of the field studying language origins. (2) When did language begin? Before answering that, we have to draw up a list of changes necessary for speech to be possible and discover when each of them appeared. The result will be a series of empirically validated answers, not a list of deductions based on a field’s a priori definitions.

The Knight/Lewis paper asks logogenological questions and takes its data from many fields but then tries to fit the answers into anthropology-shaped boxes. The authors need to recognize that they are no longer working as anthropologists and come at their conclusions from the same direction they asked their questions.

I am going to post a second report on the Knight/Lewis paper in a few days.

MMIEL Summer School in experimental and statistical methods

A replicated typo - Thu, 06/22/2017 - 15:09
September Tutorial in Empiricism: Practical Help for Experimental Novices

In September, the Language Evolution and Interaction Scholars of Nijmegen (LEvInSoN group), based in the Language and Cognition Department at the Max Planck Institute for Psycholinguistics, will be hosting a workshop on research in Language Evolution and Interaction (September 21-22) – call for posters here: http://www.mpi.nl/events/MMIEL

As an addition to this workshop, we will be hosting a short tutorial series bookending the workshop (Sept 20 & 23) covering experimental and statistical methods that should be of broad interest to a general audience. In this tutorial series, we will cover all aspects of creating, hosting, and analysing the data from a set of experiments that will be run live (online) during the workshop.

Details of the summer school can be found here: http://www.mpi.nl/events/MMIEL/summer-school


Registration is free, but required. Spots are limited and come on a first-come, first-served basis; a waitlist will be established if necessary.

Register here
