Evolutionary linguistics

MMIEL Summer School in experimental and statistical methods

A replicated typo - Thu, 06/22/2017 - 15:09
September Tutorial in Empiricism: Practical Help for Experimental Novices

In September, the Language Evolution and Interaction Scholars of Nijmegen (LEvInSoN group), based in the Language and Cognition Department at the Max Planck Institute for Psycholinguistics, will be hosting a workshop on research in Language Evolution and Interaction (September 21-22). The call for posters is here: http://www.mpi.nl/events/MMIEL

As an addition to this workshop, we will be hosting a short tutorial series bookending the workshop (Sept 20 & 23) covering experimental and statistical methods that should be of broad interest to a general audience. In this tutorial series, we will cover all aspects of creating, hosting, and analysing the data from a set of experiments that will be run live (online) during the workshop.

Details of the summer school can be found here: http://www.mpi.nl/events/MMIEL/summer-school


Registration is free, but required. Spots are limited and allocated on a first-come, first-served basis, and a waitlist will be established if necessary.

Register here

Attention and Language

Babel's Dawn - Mon, 06/12/2017 - 04:18

The most important thing I have learned in working on this blog has been the relationship between language and attention. Language, I have concluded, works by sharing and directing attention to a topic. It is really that simple, yet it is rich in implications.

Evolvability

Attention is widespread in the animal world, and all primates, certainly all apes, are well endowed with the ability to direct their attention to different points in their environment and stay focused on a task for an undefined length of time. Thus, any special human attention tasks that language might demand, such as joint attention or interactive attention, call only for tweaks of the system, not wholly new mechanisms. Anybody interested in language origins should find that this approach to language simplifies the evolutionary puzzles.

Demystifies meaning

Meaning has always been a mysterious concept, rather like that of the soul, only meaning is the soul of the word rather than the body. How does meaning get into a word or sentence in the first place? Is it in the speaker’s head? Does the sound carry meaning to the listener’s head? Or is the meaning outside the body altogether?

These questions, which come up when considering thought experiments like the Chinese Room, carry their own alarm bells. Where is the meaning? That question can only make sense if meaning is a thing. We can get rid of the confusion if we say meaning is not a thing but a response. Words pilot attention. All the many mysteries about where meaning is, how it is communicated, what changes it, etc. begin to look ridiculous as we see that all such questions assume that meaning has some kind of presence.

People can be said to understand a language when their attention is directed by the words and sentences of that language. Computers may be able to translate languages perfectly decently, but we can still maintain that they don’t know the meaning of what they are doing because their processing never involved directing attention. Any philosopher of language should appreciate the firmer basis on which to consider meaning. (Personal note: It was my recognition of the demystification that persuaded me to grab the attention idea and see how far I could run with it.)

Grounds language in perception

Attention is a function of perception, so it should not be surprising if language has many of the features of a perception.

  1. Perceptions are always perceptions of something and language is always about something.
  2. Perceptions always have a point of view and speech does too.
  3. Perceptions organize sensations into a foreground and background, and language can do the same. The foreground of an utterance is the focal point of attention. For example, if a person focuses auditory attention on a honking goose while only being vaguely aware of other sounds, a speaker can restrict an utterance to the focal point—A goose honked angrily—or include background details—A goose honked angrily over the hens’ clucking sounds.

There is enormous room for exploration here and this grounding in perception should provide much fodder for critics, gestalt psychologists, and psychologists of the newer, embodied-mind school.

Explains syntactic structure

Perception redirects attention and syntax works by controlling shifts in the listener’s attention.

I argue this case in detail elsewhere and am confident that attention-based syntax can explain even the strongest observations made in favor of a Universal Grammar, with the extra benefit of making sense. Syntactic structure reflects the limits of attention and memory and is not merely an arbitrary set of rules. Linguists with an interest in syntax should appreciate the approach, and composition teachers should like the way it lets students use grammar as a help rather than a stumbling block to clear writing.

(See Attention-Based Syntax and Reflexive Anaphora in Attention-Based Syntax.)

Learnability

The fact that children master speech so easily has long been a mystery. Is it inborn or learned? It turns out the innate part comes from our ability to attend, to shift attention, and to remember. Anybody interested in children’s acquisition of language should find that the approach simplifies the task to be explained.

The greatest objection to this approach is likely to be that it depends on conscious rather than mechanical or computational processes. Attempts to model attention on computers generally treat attention as a passive filter of input, whereas attention here is seen as an active power that selects elements for conscious contemplation. But the dogma that the mind is the brain and the brain is a computer is only an assumption. When a different approach can make sense of so many aspects of a problem, it should take more than stubborn dogma to defeat it.

EVOLANG XII (2018): Call for Papers

A replicated typo - Fri, 06/02/2017 - 10:00

The 12th International Conference on the Evolution of Language invites substantive contributions relating to the evolution of human language.

IMPORTANT DATES
Abstract submission: 1 September 2017
Notification of acceptance: 1 December 2017
Early-bird fee: 31 December 2017
Conference: 16-19 April 2018

Submission Information
Submissions may be in any relevant discipline, including, but not limited to: anthropology, archeology, artificial life, biology, cognitive science, genetics, linguistics, modeling, paleontology, physiology, primatology, philosophy, semiotics, and psychology. Normal standards of academic excellence apply. Submitted papers should aim to make clear their own substantive claim, relating this to the relevant, up-to-date scientific literature in the field of language evolution. Submissions should set out the method by which the claim is substantiated, the nature of the relevant data, and/or the core of the theoretical argument concerned. Novel and original theory-based submissions are welcome. Submissions centred around empirical studies should not rest on preliminary results.

Please see http://evolang.org/submissions for submission templates and further guidance on submission preparation. Submissions can be made via EasyChair (https://easychair.org/conferences/?conf=evolang12) by SEPTEMBER 1, 2017 for both podium presentations (20-minute presentation with additional time for discussion) and poster presentations. All submissions will be refereed by at least three relevant referees, and acceptance is based on a scoring scheme pooling the reports of the referees. In recent conferences, the acceptance rate has been about 50%. Notification of acceptance will be given by December 1, 2017.

For any questions regarding submissions to the main conference please contact scientific-committee@evolang.org.

Workshops: in addition to the general session, EVOLANG XII will host up to five thematically focused, half-day workshops. See here for the Call for Workshops.

Deadline extended for Triggers of Change in the Language Sciences

A replicated typo - Thu, 06/01/2017 - 11:15

The 2nd XLanS conference, Triggers of Change in the Language Sciences, has extended its submission deadline to June 14th.

This year’s topic is ‘triggers of change’:  What causes a sound system or lexicon or grammatical system to change?  How can we explain rapid changes followed by periods of stability?  Can we predict the direction and rate of change according to external influences?

We have also added two new researchers to our keynote speaker list.

Wh-words sound similar to aid rapid turn taking

A replicated typo - Tue, 05/30/2017 - 12:14

A new paper by Anita Slonimska and myself attempts to link global tendencies in the lexicon to constraints from turn taking in conversation.

Question words in English sound similar (who, why, where, what …), so much so that this class of words is often referred to as wh-words. This regularity exists in many languages, though the shared phonetics differ from language to language, for example:

English     Latvian     Yaqui      Telugu
haw         ka:         jachinia   elaa
haw mɛni    tsik        jaikim     enni
haw mətʃ    tsik        jaiki      enta
wət         kas         jita       eem; eemi[Ti]
wɛn         kad         jakko      eppuDu
wɛr         kuɾ         jaksa      eTa; eedi; ekkaDa
wɪtʃ        kuɾʃ        jita       eevi
hu          kas         jabesa     ewaru
waj         ˈkaːpeːts   jaisakai   en[du]ceeta; enduku

In her Master’s thesis, Anita suggested that these similarities help conversation flow smoothly.  Turn taking in conversation is surprisingly swift, with the usual gap between turns being only 200ms.  This is even more surprising when one considers that retrieving, planning and beginning to pronounce a single word takes around 600ms.  Therefore, speakers must begin planning what they will say at least 400ms before the current speaker has finished speaking (as demonstrated by many recent studies, e.g. Barthel et al., 2017). Starting your turn late can be interpreted as uncooperative, or lead to missing out on a chance to speak.

Perhaps the harshest environment for turn-taking is answering a content question.  Responders must understand the question, retrieve the answer, plan their utterance and begin speaking.  It makes sense to expect that cues would evolve to help responders recognise that a question is coming.  Indeed, there are many paralinguistic cues, such as rising intonation (even at the beginning of sentences) and eye gaze.  Another obvious cue is question words themselves, especially when they appear at the beginning of questions. Slonimska hypothesised that wh-words sound similar in order to provide an extra cue that a question is about to be asked, so that the responder can begin preparing their turn early.

We tested this hypothesis, first by asking whether wh-words really do tend to sound similar within languages.  We combined several lexical databases to produce a word list for 1000 concepts in 226 languages, including question words.  We found that question words are:

  • More similar within languages than between languages
  • More similar than other sets of words (e.g. pronouns)
  • Often composed of salient phonemes

Of course, there are several possible confounds, such as languages being historically related, and many wh-words being derived from other wh-words within a language. We attempted to control for these using stratified permutation, excluding analysable forms, and comparing wh-words to many other sets of words, such as pronouns, which are subject to the same processes.  Not all languages have similar-sounding wh-words, but across the whole database the tendency was robust.
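
For readers curious about the mechanics, here is a minimal Python sketch of one way to run a stratified permutation test of this kind. The data structures, the plain edit-distance measure and the family stratification are my own illustrative assumptions, not the paper’s actual code; the real scripts are in the github repository linked at the end of this post.

```python
# Hypothetical sketch: is within-language similarity of wh-words higher than
# chance, permuting forms only within language families (stratification)?
import random
from itertools import combinations

def edit_distance(a, b):
    # Plain Levenshtein distance as a stand-in (dis)similarity measure.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def mean_within_similarity(languages):
    # languages: {language_name: [wh-word forms]}; higher = more similar.
    dists = [edit_distance(w1, w2)
             for forms in languages.values()
             for w1, w2 in combinations(forms, 2)]
    return -sum(dists) / len(dists)

def stratified_permutation_test(languages, families, n=1000):
    # families: {family_name: [language_names]}. Shuffling forms only among
    # related languages means shared ancestry alone cannot fake the effect.
    observed = mean_within_similarity(languages)
    hits = 0
    for _ in range(n):
        shuffled = {}
        for langs in families.values():
            pool = [form for l in langs for form in languages[l]]
            random.shuffle(pool)
            for l in langs:
                k = len(languages[l])
                shuffled[l], pool = pool[:k], pool[k:]
        if mean_within_similarity(shuffled) >= observed:
            hits += 1
    return hits / n   # one-tailed p-value
```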

Another prediction is that the wh-word cues should be more useful if they appear at the beginning of question sentences.  We tested this using typological data on whether wh-words appear in initial position.  While the trend was in the right direction, the result was not significant when controlling for historical and areal relationships.

Despite this, we hope that our study shows that it is possible to connect constraints from turn taking to macro-level patterns across languages, and then test the link using large corpora and custom methods.

Anita will be presenting an experimental approach to this question at this year’s CogSci conference.  We show that /w,h/ is a good predictor of questions in real English conversations, and that people actually use /w,h/ to help predict that a question is coming up.

Slonimska, A., & Roberts, S. G. (2017). A case for systematic sound symbolism in pragmatics: Universals in wh-words. Journal of Pragmatics, 116, 1-20. ArticlePDF.

All data and scripts are available in this github repository.

Which came first: Words or Syllables?

Babel's Dawn - Thu, 05/25/2017 - 00:25

Back when this blog was starting out I reported on a paper given by Judy Kegl (now Judy Shepard-Kegl) at a conference in South Africa. Kegl is an expert on sign language and had observed a new sign language emerge at a school for the deaf in Nicaragua. She listed four innate qualities that lead to language: (1) love of rhythm or prosody, (2) a taste for mirroring (imitation), (3) an appetite for linguistic competence, and (4) the wish to be like one’s peers. I found this an interesting and plausible list and have wondered why I don’t see more references to it. Rereading that old post has made the silence more comprehensible. It is entirely human and childish and has nothing to do with computation, or syntax, or conditioning.

The scene it brings to my mind is of a playground during class recess. The kids are lined up playing jump rope, chanting their rhymes as the rope twirls. Dashing in, making their leaps and dashing off. It is non-serious, but recognizably human. Other animals rough-house and tumble together but they do not form rhythmic play groups. We are too pompous to look to playgrounds for information about our own natures.

I was reminded of that old post when I read a paper by Wendy Sandler, “What comes first in language emergence?”, which is included in a volume entitled Dependencies in Language. She offers the following provocative sentence:

The pattern of emergence we see [in sign languages] suggests that the central properties of language that are considered universal—phonology and autonomous syntax—do not come ready-made in the human brain, and that a good deal of language can be present without clear evidence of them. [page 65]

She explains that in established sign languages, words do have a “phonological” structure: a set of ways of holding the hand and using the face and body to create and differentiate words. She offers an example from Israeli Sign Language. The signs for send and tattle call for the same hand gesture, but one is held away from the body and the other close to the mouth. The spatial location is the equivalent of a spoken distinction between cattle and rattle. One phoneme makes all the difference. There are also body movements that achieve the same effect as intonation. Established sign languages have a clear duality of patterning, i.e., a level of repeated signs that are meaningless in themselves but gain meaning when combined in agreed-upon ways.

A recently developed sign language, Al-Sayyid Bedouin Sign Language (ABSL), shows that the first signers did not have these “phonemes” from the beginning. Even now that the first signers are old, they still use only their hands to make words and do not make distinctions based on location or body movements. A dictionary of 300 signs in ABSL fails to show any “evidence of a discrete, systematic meaningless level of structure.” [71] Among the youngest signers, however, indications of phonemes have begun to appear, not so much to differentiate between words as to make articulation of a word easier. I must comment, however, that ease of articulation may vary by culture. I was always taught that we say an apple instead of a apple because it is easier to say if you slip a consonant between the two vowels. Then I found myself trying to explain the rule to Swahili speakers, whose language routinely articulates double vowels as two consecutive sounds with no consonant in between. So the issue of ease of articulation suggests to me that some culture-bound norms may be making their way into ABSL. But this is beside the main point, which is that a language at its beginning needn’t have a meaningless layer. It might start with whole words.

This suggestion is a radical departure from standard linguistics, which puts phonology at the base of the pyramid supporting a language and meaning up at the top. At the same time, it may not surprise parents whose children start saying individual words long before they master the rules of pronunciation. I can even look back on my own Swahili training, which had me uttering phrases right away, even when the sound system was so baffling that I had a hard time just repeating a new polysyllabic word. So Sandler’s position is simultaneously radical yet not surprising.

After words we get syntax and prosody (intonation, timing and stress). Classical linguistics puts syntax before prosody, but Sandler says that in ABSL prosody came first. The earliest signers had no way of expressing complex sentences but put a pause between one- or two-word phrases. By now the signs are much more complex, but the kind of syntax that imposes a Chomskyan structure on sentences has yet to appear. So once again Sandler finds the reverse of the linguists’ expectations, though again I don’t suppose many parents will be surprised. Children don’t start using syntax until age 3, but tones of voice and intonation are apparent immediately.

We have to be cautious about these arguments. It is not immediately clear that signing and speech follow the same path. Children make many meaningless sounds before they start using words, and our remote ancestors may have babbled for a million years before they got around to forming words. A number of linguists, most prominently Derek Bickerton, have studied the creation of creole languages. Their initial vocabulary comes from an existing pidgin, which combines words from multiple languages, so there too we have words before phonology. I wonder what Bickerton would say about prosody before syntax.

The critical point is the commonsense one that from the beginning the effort of human communicators is to produce meaningful utterances. Meaning in the form of words comes before a settled phonology, and when words start appearing in strings, meaning again precedes any abstract structure. The idea that these findings could surprise anybody shows us how far linguistics strayed from the plausible when it decided to study structures first and meanings later.

Call for Posters – Minds, Mechanisms and Interaction in the Evolution of Language

A replicated typo - Mon, 05/22/2017 - 12:55

The workshop “Minds, Mechanisms and Interaction in the Evolution of Language” will be hosted at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands on 21st-22nd September 2017. The workshop will include a poster session on topics related to the themes of the meeting. We are interested in contributions investigating the emergence and evolution of language, specifically in relation to interaction.

We are looking for work in the following areas:

  • biases and pre-adaptations for language and interaction
  • cognitive and cultural mechanisms for linguistic emergence
  • interaction as a driver for language evolution

We invite submissions of abstracts for posters, particularly from PhD students and junior researchers.

Please submit an abstract of no more than 300 words (word count not including references) by email to hannah.little@mpi.nl.  Please include a title, authors, affiliations and contact email addresses.  

Deadline: July 9th 2017

Outcome of decision process by: 24th July

Abstracts will be reviewed by the workshop committee.

The poster session will take place on the evening of Thursday September 21st 2017.

Registration is free (details to follow).

Plenary speakers:

  • David Leavens, University of Sussex
  • Jennie Pyers, Wellesley College
  • Monica Tamariz, Heriot Watt University

The workshop also includes presentations from the Levinson group (Language Evolution and Interaction Scholars of Nijmegen)  and an introduction by Stephen Levinson himself!

Summer school:

The workshop will also be bookended with a summer school on 20th and 23rd September specifically aimed at PhD students. The school will consist of a short tutorial series covering experimental and statistical methods that should be of broad interest to a general audience, though focussed around the theme of the workshop. In this tutorial series, we will cover all aspects of creating, hosting, and analysing the data from a set of experiments that will be run live (online) during the workshop! More details for the summer school and registration will follow.

2 PhD positions available with Bart de Boer in Brussels!

A replicated typo - Wed, 05/17/2017 - 11:08

Two PhD positions are available in the AI lab at the Vrije Universiteit Brussel with Bart de Boer.

One position is on modelling an emerging sign language:

We are looking for a PhD student to work on modeling the emergence of sign languages, with a focus on the social dynamics underlying existing signing communities.  The project relies on specialist expertise on the Kata Kolok signing community, which has emerged in a Balinese village over the course of several generations. The emergence of Kata Kolok and the demographics of the village have been closely studied by geneticists, anthropologists, and linguists. A preliminary model simulating this emergence has been built in Python. The aim of the project is to investigate, using a combination of linguistic field research and computational modeling, which factors – cultural, genetic, linguistic and others – determine the way language emerges. There will be one PhD student in Nijmegen conducting primary field research on Kata Kolok and one based in Brussels (the position advertised here) working on the computational aspect of the project. Both positions are part of an FWO-NWO funded collaboration between the Artificial Intelligence lab of the Vrije Universiteit Brussel and the Center for Language Studies at Radboud University Nijmegen; the advertised position is supervised by profs. Bart de Boer and Connie de Vos.

Advertisement here: https://ai.vub.ac.be/PhDKataKolok

The other is on modelling acquisition of speech:

We are looking for someone who has (or who is about to complete) a master’s degree in artificial intelligence, speech technology, computer science or equivalent. You will work on a project that investigates advanced techniques for learning the building blocks of speech, with a focus on spectro-temporal features and dynamic Bayesian networks. It is part of the Artificial Intelligence lab of the Vrije Universiteit Brussel and is supervised by prof. Bart de Boer.

Advertisement here: https://ai.vub.ac.be/PhD_Spectrotemporal_DBN

The deadline for application is 1st July 2017. Other details available at the links above.

Questions about details of the positions themselves should be directed to Bart de Boer (bart@arti.vub.ac.be). However, I myself did my PhD with Bart at the VUB, so I’d also be happy to answer more informal questions about working in the lab/living in Belgium/other things (hannah@ai.vub.ac.be).

News From Prairie Dog Town

Babel's Dawn - Mon, 05/15/2017 - 19:16

This week’s New York Times Sunday Magazine has an article titled Can Prairie Dogs Talk? The answer is so obviously no, that one is forced to read it to see what kind of case the author can make. Turns out he makes an interesting one.

It is easy to come up with a definition of language that bars prairie-dogese. If you define a language as the set of sentences that can be generated by its syntactical rules, why then the answer is still no. Prairie dogs do not speak in sentences and appear to have no generative syntax. But I don’t define language that way.

A biologist with the unusual name of Constantine Slobodchikoff has been studying prairie dogs for decades. He has demonstrated that the varmints make a distinct sound when people appear and a different sound when a coyote appears. This kind of thing has long been known about vervet monkeys and suggests the minimum that one can try to pass off as a language.

However, the vervet sounds appear to be innate rather than learned and prompt different reactions: tree climbing in case of the leopard warning, ducking under bushes for the hawk or eagle warning, and bolting upright while checking the ground for a snake warning. Maybe these shouts are words, although there is no reason to insist on it. The vervets have a small set of distinctive warnings that produce different responses in the listeners. Perhaps prairie dogs have something similar, except for the fact that their response to pretty much any danger is to run down into their underground world.

Let’s remind ourselves that in human language, words are cultural inventions that can be understood in context. Some words, like into and doesn’t, require other words if they are to mean anything at all, and some words, like table, have a default meaning but, if said alone, are likely to provoke a response along the lines of What about a table? Or perhaps the speaker is eighteen months old, in which case the response may be yes, it’s a ping-pong table. It is going to take more than this to argue that vervets or prairie dogs have words.

Then Slobodchikoff took his work a step further, sending students wearing different colored tee shirts to wander among the prairie dog grounds. The animals made their usual human warning but combined it with another sound that varied according to tee-shirt color. It seems the prairie dogs might be speaking phrases. Some scholars object to the proposal because there is no visible reaction to the different bits of color information, but that point misses one of the unusual features of language. Language needn’t produce any reaction at all in the audience. I can sit in a chair for two hours reading a book, showing no response beyond the occasional turning of a page. Yet I can then arise from my chair a changed man. The only way you might discover the change is if, three months later, I say something based on that book.

Suppose that some humans belong to an environmental group and wear their organization’s green tee shirt when they go hiking. They pride themselves on the way they don’t mess with nature, and they pose no threat to the prairie dog community. But perhaps there is another group, this one a gang of troublemakers who wear red shirts and like to indulge in sadistic torment of the prairie dogs, pouring gasoline down the holes and setting it on fire. Suddenly the human/green and human/red cries carry important, distinctive bits of information. I do not know, and the article does not say, but it suggests an area for further research: do the prairie dogs ever make use of the information in their distinctive warning cries?

In my last post I discussed “The Ultimate Test” of a theory about language: could such a system evolve? Is it possible for prairie dogs to have evolved the ability to speak in phrases about events on the earth’s surface? Unfortunately, these animals lead the most important part of their lives underground, where observation is difficult to conduct. Just how communal are these animals? They do not seem to be like mole rats, other burrowing mammals that live like eusocial insects. Mole rats have a reproductive queen while the other females do the labor of the colony. I know nothing about their communication system, but eusocial insects use elaborate chemical systems to pass the news. Prairie dogs are not this cooperative, but maybe they are at least humanly communitarian, in the sense of sharing a great deal of the burdens, joys, food, and information of life with their fellows. If they are heavily into sharing, I can easily imagine an evolutionary process that leads to sharing information through some kind of phrase-based system. But prairie dogs do not seem to live that way. A quick check of Wikipedia shows they are family-based, with rivalries between families and breeding groups. “A prairie dog town may contain 15-26 family groups.” A mix of families that do not share much between them does not sound like fertile soil for evolving a communication system much beyond warning alarms.

Warning sounds are a valuable group benefit that has evolved many times in many non-communal species, and if prairie dogs can make complex, phrase-like warnings, that is an interesting discovery. But warning sounds, even complex ones, are not going to satisfy this blog’s understanding of language.

The definition of language that is used on this blog proposes a triad of (1) a speaker (or signer or writer) and (2) a listener (or observer or reader) paying joint attention to (3) a topic. The prairie dog does not have this triad. First, there seem to be innumerable speakers, who—at best—pass along a message, but more likely act on a reflex that sets many of them to issuing the same signal.

Second, it is not clear that any of the listeners pay joint attention to the intruder. They hear the warning and zip back underground. Like vervets they respond appropriately, but do not seem to attend to it in the same way the speaker does. Why would they? The alarm is a cry for action, not a bit of social bonding.

Finally, they have only one topic: intruders. It may be unfair to demand that animals with a grape-sized brain take much interest in the nature of the world around them, but without it they are not going to discuss many topics.

The triad requires a special kind of community, one in which members are willing to inform one another of secret information, and one in which members are willing to listen and trust what other members report. It also requires that community members be capable of addressing any topic that seems worthy of study. A language restricted to various forms of shouting, “Look out,” would be no language at all.

It is always good to be reminded that we build our nature on powers scattered throughout the biological world, but it is also good to keep in mind that there is a reason for having and not having a power. We live in cultural niches and need language to survive in them. Prairie dogs can do many things that I cannot do, and apparently they can spread the word about my tee-shirt color, but they are not on the royal road to Shakespeare.

Iconicity evolves by random mutation and biased selection

A replicated typo - Mon, 05/15/2017 - 13:52

A new paper by Monica Tamariz, myself, Isidro Martínez and Julio Santiago uses an iterated learning paradigm to investigate the emergence of iconicity in the lexicon.  The languages were mappings between written forms and a set of shapes that varied in colour, outline and, importantly, how spiky or round they were.

We found that languages which begin with no iconic mapping develop a bouba-kiki relationship when the languages are used for communication between two participants, but not when they are just learned and reproduced.  The measure of the iconicity of the words came from naive raters.

Here’s one of the languages at the end of a communication chain, and you can see that the labels for spiky shapes ‘sound’ more spiky:

An example language from the final generation of our experiment: meanings, labels and spikiness ratings.

These experiments were actually run way back in 2013, but as is often the case, the project lost momentum.  Monica and I met last year to look at it again, and we did some new analyses.  We worked out whether each new innovation that participants created increased or decreased iconicity.  We found that new innovations are equally likely to result in higher or lower iconicity: mutation is random.  However, in the communication condition, participants re-used more iconic forms: selection is biased.  That fits with a number of other studies on iconicity, including Verhoef et al., 2015 (CogSci proceedings) and Blasi et al. (2017).
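
As a rough illustration of that two-part logic (not the paper’s actual code; the variable names and the binomial test are my own assumptions), the analysis could be sketched in Python along these lines:

```python
# Hypothetical sketch: (1) are innovations directionless (random mutation)?
# (2) do communicators preferentially re-use more iconic forms (biased
# selection)? The paper's real scripts are in the github repository below.
from statistics import mean
from scipy.stats import binomtest   # requires SciPy >= 1.7

def test_random_mutation(innovations):
    """innovations: list of (iconicity_before, iconicity_after) pairs,
    one per newly created form."""
    increases = sum(after > before for before, after in innovations)
    changes = sum(after != before for before, after in innovations)
    # Random mutation predicts increases and decreases are equally likely.
    return binomtest(increases, changes, p=0.5)

def iconicity_advantage(reused, discarded):
    """reused / discarded: iconicity ratings of forms kept vs. dropped
    between rounds. Biased selection predicts a positive gap."""
    return mean(reused) - mean(discarded)
```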

Matthew Jones, Gabriella Vigliocco and colleagues have been working on similar experiments, though their results are slightly different.  Jones presented this work at the recent symposium on iconicity in language and literature (you can read the abstract here), and will also present at this year’s CogSci conference, which I’m looking forward to reading:

Jones, M., Vinson, D., Clostre, N., Zhu, A. L., Santiago, J., Vigliocco, G. (forthcoming). The bouba effect: sound-shape iconicity in iterated and implicit learning. Proceedings of the 36th Annual Meeting of the Cognitive Science Society.

Our paper is quite short, so I won’t spend any more time on it here, apart from one other cool thing: for the final set of labels in each generation we measured iconicity using scores from naive raters, but for the analysis of innovations we had hundreds of extra forms.  We used a random forest to predict iconicity ratings for the extra labels from unigrams and bigrams of the rated labels.  It accounted for 89% of the variance in participant ratings on unseen data.  This is a good improvement over older techniques such as using the average iconicity of the individual letters in the label, since a random forest allows the weighting of particular letters to be estimated from the data, and also allows for non-linear effects when two letters are combined.
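
Here is a toy Python sketch of that extrapolation step, using made-up labels and ratings purely for illustration; the real data and pipeline are in the supporting information and github repository linked below.

```python
# Toy sketch: predict human spikiness ratings from character unigram and
# bigram counts with a random forest, then extrapolate to unrated labels.
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import CountVectorizer

rated_labels = ["kizeki", "bubano", "zatiki", "momalu"]  # rated by humans
spikiness = [5.2, 1.3, 4.7, 1.8]                         # mean ratings (toy)

# Character unigram and bigram counts, e.g. "k", "i", "ki", "iz", ...
vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vectorizer.fit_transform(rated_labels)

forest = RandomForestRegressor(n_estimators=500, random_state=1)
forest.fit(X, spikiness)

# Extrapolate ratings to unrated innovations produced during the experiment.
predicted = forest.predict(vectorizer.transform(["zikizi", "bonubo"]))
```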

However, it turns out that most of the prediction is done by a simple decision tree with just 3 unigram variables: shapes were rated as more spiky if their labels contained a ‘k’, ‘j’ or ‘z’ (our experiment was run in Spanish).

So the method was a bit overkill in this case, but might be useful for future studies.

All data and code for doing the analyses and random forest prediction is available in the supporting information of the paper, or in this github repository.

Tamariz, M., Roberts, S. G., Martínez, J. I., & Santiago, J. (2017). The interactive origin of iconicity. Cognitive Science. doi:10.1111/cogs.12497 [PDF from MPI]
