
Sound-symbolism boosts novel word learning: the MS Paint version

I have a new article out!

Gwilym Lockwood, Mark Dingemanse, and Peter Hagoort. 2016. “Sound-Symbolism Boosts Novel Word Learning.” Journal of Experimental Psychology: Learning, Memory, and Cognition. doi:10.1037/xlm0000235 (download link, go on, it’s only eight pages)

and I’m particularly proud of this one because:

a) it’s a full article discussing some of the stats I’ve been talking about at conferences for almost two years, and

b) it’s probably the only scientific article to formally cite Professor Oak’s Pokédex.

So, if you like things like iconicity and logit mixed models and flawed experiments cunningly disguised as pre-tests that I meant to do all along, you can read it here.

Enough of that, though. I know that what you’re really here for is Sound-symbolism boosts novel word learning: the MS Paint version.

The first thing we did was to select our words from almost a hundred ideophones and arbitrary adjectives. Participants heard the Japanese word, then saw two possible translations – one real, one opposite – and had to guess which one was correct. This was pretty easy for the ideophone task. People can generally guess the correct meaning with some certainty, because it just kind of sounds right for one of the options (due to the cross-modal correspondences between the sound of the word and its sensory meaning). It was a fair bit harder for the arbitrary adjectives, where there are no giveaways in the sound of the word.

[Figure: 2AFC stimulus selection]

It’s kind of taken for granted in the literature that people can guess the meanings of ideophones at above chance accuracy in a 2AFC test, but I’ve always struggled to find a body of research which shows this. This pre-test shows that people can indeed guess ideophones at above chance accuracy in a 2AFC test – at 63.1% accuracy (μ=50%, p<0.001) across 95 ideophones, in fact. So, now, anybody who wants to make that claim has the stats to do so. Nice. We’re now rerunning this online with thousands of people as part of the Groot Nationaal Onderzoek project, so stay tuned for more on that.
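If you're curious what that kind of above-chance comparison looks like in practice, here's a minimal sketch of a one-sided exact binomial test in plain Python. The trial counts are made up for illustration and are not the ones from the paper:

```python
from math import comb

def binom_p_above_chance(k, n, p=0.5):
    """One-sided exact binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: 300 2AFC trials, 189 correct (63%) -- invented numbers
n_trials, n_correct = 300, 189
p_value = binom_p_above_chance(n_correct, n_trials)
print(n_correct / n_trials, p_value)  # accuracy well above the 50% chance level
```

With 63% accuracy over a few hundred trials, the p-value comes out far below 0.001, which is the kind of result that licenses the "above chance in a 2AFC test" claim.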

Then, two different groups did a learning task. We originally had the learning task as a 2AFC setup where participants learned by guessing and then getting feedback. In terms of results, this did work… but about a third of the participants realised that they could “learn” by ignoring the Japanese words completely and just remembering to pick fat when they saw the options fat and thin. Damn.

[Figure: the failed 2AFC learning task]

Anyway. We got two more groups in to do separate learning and test rounds with a much better design. One group got all the ideophones, half with their real meanings, half with their opposite meanings. The other group got all the arbitrary adjectives, half with their real meanings, half with their opposite meanings.

In the same way that it’s easy to guess the meanings of the ideophones, we predicted that the ideophones with their real translations would be easy to learn because of the cross-modal correspondences between linguistic sound and sensory meaning…

[Figure: learning ideophones with their real meanings]

…that the ideophones with their opposite translations would be hard to learn, because the sounds and meanings clash rather than match…

[Figure: learning ideophones with their opposite meanings]

…and that there wouldn’t be much difference between conditions for the arbitrary adjectives, because there’s no real association between sound and meaning in arbitrary words anyway.

[Figure: learning arbitrary adjectives]

And sure enough, that’s exactly what we found. Participants were right 86.1% of the time for ideophones in the real condition, but only 71.1% of the time for ideophones in the opposite condition. With the arbitrary adjectives, it was 79.1% versus 77%, which isn’t a statistically reliable difference.
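The paper itself analyses this with logit mixed models, which won't fit in an MS Paint blog post. As a much cruder back-of-the-envelope illustration, though, a simple two-proportion z-test on invented trial counts (1000 per condition, chosen to match the percentages above — not the real data) shows the same pattern:

```python
from math import sqrt, erf

def two_prop_z(k1, n1, k2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 1 - erf(abs(z) / sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p_two_sided

# Hypothetical counts echoing the reported percentages (not the real data)
z_ideo, p_ideo = two_prop_z(861, 1000, 711, 1000)  # 86.1% vs 71.1%
z_arb, p_arb = two_prop_z(791, 1000, 770, 1000)    # 79.1% vs 77.0%
print(p_ideo < 0.001, p_arb > 0.05)
```

The real-versus-opposite gap for the ideophones comes out highly significant, while the gap for the arbitrary adjectives doesn't get anywhere near significance, which is the qualitative result the mixed models in the paper support properly.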

Additional bonus for replication fans! (that’s everybody, right?): in a follow-up EEG experiment doing this exact same task with Japanese ideophones, another 29 participants got basically the same results (86.7% for the real condition, 71.3% for the opposite condition). That’s going to be submitted in the next couple of weeks.

Here’s the histogram from the paper… but in glorious technicolour:

[Figure: accuracy for each condition in both experiments (colour version)]

(It would have cost us $900 to put one colour figure in the article, even though it’s the publisher who’s printing it and making money from it. The whole situation is quite silly.)

The point of this study is that words that sound like what they mean are easier to learn than words that sound like the opposite of what they mean, with words that don’t particularly sound like anything somewhere in the middle. This seems fairly obvious, but for a long time people have assumed that it doesn’t really happen. There’s been a fair bit of research on onomatopoeia and ideophones helping babies learn their first language, but not much yet with adults. It also lends some support to the broader suggestion that we use similar sounds to talk about and understand sensory things across languages, but not so much for other things, which in turn suggests that sound-symbolic words may well have been how language started out in the first place.

I’d love to re-run this study on a more informal (and probably unethical) basis where a class of school students learning Japanese are given a week to learn the same word list for a vocab test where they’d have to write down the Japanese words on a piece of paper. I reckon that there’d be the same kind of difference between conditions, but it’d be nice to see that happen when they really have to learn the words to produce a week later, not just recognise a few minutes later. If anybody wants to offer me a teaching position at a high school where I can try this out and probably upset lots of parents, get in touch; I need a job when my PhD contract runs out in August.

The thing I find funniest about this entire study is that when I was studying Japanese during my undergrad degree, I found ideophones really difficult to learn. I thought they all sounded kind of the same, and pretty daft to boot. The ideophone for “exciting/excited” is wakuwaku, which I felt so uncomfortable saying that I feigned indifference about things in oral exams to avoid saying it (but to be fair, feigned indifference was my approach to most things in my late teens and early twenties). There’s probably an ideophone to express the internal psychological conflict you get when you realise you’re doing a PhD in something you always tried to ignore during your undergrad degree, but I’m not sure what it is. I’ll bet my old Japanese lecturers would be pretty niyaniya if they knew, though.


(almost) everything you ever wanted to know about sound-symbolism research but were too afraid to ask.

Publications are like buses. Not because you spend most of your PhD with no publications then two turn up at once (although that is what’s just happened to me), but because you might get overtaken by another bus going the same way, and you might want to be somewhere else by the time you get to your original destination.

The bus I’ve just taken is my new review paper:

Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: a review of behavioral, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6, 1246.

I wrote it along with Mark Dingemanse, my supervisor at the Max Planck Institute. It covers experimental research on sound-symbolism from the last few years and pulls together the main themes and findings so far. To summarise, these are:

  1. That large vowels (e.g. a, o) are associated with large things and slow things and dark things and heavy things
  2. That small vowels (e.g. i, e) are associated with small things and fast things and bright things and light things
  3. That voiced consonants (e.g. b, g) have the same kind of associations as large vowels
  4. That voiceless consonants (e.g. p, k) have the same kind of associations as small vowels
  5. That this is probably due to a combination of acoustic properties (i.e. the way something sounds when you hear it) and articulatory properties (i.e. the way something feels when you say it)
  6. That these cross-modal associations mean people can guess the meanings of sound-symbolic words in languages that they don’t know
  7. That these cross-modal associations mean children and adults learn sound-symbolic words more easily
  8. That these cross-modal associations in sound-symbolic words elicit different brain processes from regular words, and/or stronger versions of the same brain processes
  9. That it’s more informative to investigate these cross-modal associations using real sound-symbolic words from real languages than using non-words from made-up languages
  10. That it’s more informative to investigate these cross-modal associations using more complex experimental tasks than by simply asking participants to choose between two options
  11. That it’s not accurate to treat arbitrariness and iconicity as two competitors in a zero-sum language game, even if it does make our work seem more important

We’re pretty happy with this, and the paper is a nice one-stop shop for everything you’ve ever wanted to know about sound-symbolism research but were too afraid to ask. We don’t finish it off with a grand model of how it works, because we don’t really know (and because I’ve still got at least two more experiments to do in my PhD before I’ll have a decent idea), but we do collect a lot of individual strands of research into a few coherent themes which should be useful for anybody else who’s doing similar stuff.

Even though it’s hot off the press this morning, it’s taken a long time to get to this stage. I started doing all the reading and the writing in spring 2014, then Mark and I restructured it quite a lot, and then it got put on the back burner while I read more things and did more things. We came back to it at the start of this year, added and changed a few things, and submitted it earlier this summer. After a fairly quick and painless review process, it’s now out.

The first frustration is that there’s a small but important misprint in the text; it’s frustrating that it’s there, that it slipped past two authors, two reviewers, and the editor, and that Frontiers won’t amend it (despite being an online-only journal). In this misprint, we accidentally misreport Moos et al. (2014). They found that people associate the vowel [a] with the colour red, and that this colour association becomes more yellow/green as the vowel gets smaller (towards the vowel [i]). However, we wrote it the wrong way round in the text and the accompanying figure. So, here’s the correct version of Figure 1 from the review paper:

[Figure: corrected Figure 1 – cross-modal mappings in vowel space]


Secondly, since submitting the article and having the positive reviews back, I’ve come across two studies in particular which I wish we could have included but couldn’t because we were already on that bus. These studies are:

Sidhu, D. M., & Pexman, P. M. (2015). What’s in a name? Sound symbolism and gender in first names. PLoS ONE, 10(5), e0126809. (This one starts and ends with the Shakespeare quote about roses by different names smelling as sweet to describe arbitrariness and iconicity, which is a quote I’ve always wanted to use myself, so good on them.)

Jones, M., Vinson, D., Clostre, N., Zhu, A. L., Santiago, J., & Vigliocco, G. (2014). The bouba effect: sound-shape iconicity in iterated and implicit learning. In Proceedings of the 36th Annual Meeting of the Cognitive Science Society (pp. 2459–2464). Québec. (I’d seen this referred to in various presentations as a work in progress, but I hadn’t come across the actual, citable CogSci conference paper until a couple of weeks ago.)

Both these studies investigate the kiki/bouba effect, which is the way people associate spiky shapes with spiky sounds (i.e. small vowels and voiceless consonants) and round shapes with round sounds (i.e. rounded vowels like o, and voiced consonants). Both have well-designed methods which are quite complicated to explain but address the questions really well, and both find similar things. The original kiki/bouba studies found the split between round and spiky by making people choose between two options, and so people matched round shapes with round sounds and spiky shapes with spiky sounds. Simple enough.

However, these two studies show that roundness and spikiness don’t contribute equally to the effect. Rather, there’s a massive effect of roundness, while the association between spiky sounds and spiky shapes is much weaker, and may even just be a default association because it was the other option in the original studies. Had I known in time, I’d have included another paragraph or two in the review paper about how future studies can and should address whether the associations outlined in points 1–4 fall along an even continuum (in the way that size associations seem to fall evenly between i and a) or whether one particular feature is driving the effect (in the way that roundness drives the round/spiky non-continuum). Sadly, I only came across these studies after it was too late to include them, but hopefully they’ll be picked up by others in future!


Ideophones in Japanese modulate the P2 and late positive complex responses: MS Paint version

I just had my first paper published:

Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6, 933.

It’s completely open access, so have a look (and download the PDF, because it looks a lot nicer than the full text).

It’s a fuller, better version of my MSc thesis, which means that I’ve been working on this project on and off since about April 2013. Testing was done in June/July 2013 and November 2013. Early versions of this paper have been presented at an ideophone workshop in Tokyo in December 2013, a synaesthesia conference in Hamburg in February 2014, and a neurobiology of language conference in Amsterdam in August 2014. It was rejected once from one journal in August 2014, and was submitted to this journal in October 2014. It feels great to have it finally published, but also kind of anticlimactic, given that I’m focusing on some different research now.

I feel like the abstract and full article describe what’s going on quite well; this is a generally under-researched area within the (neuro)science of language as it is, so it’s written for the sizeable number of people who aren’t knowledgeable about ideophones in the first place. However, if you can’t explain your research using shoddy MS Paint figures, then you can’t explain it at all, so here goes.

Ideophones are “marked words which depict sensory imagery” (Dingemanse, 2012). In essence, this means that ideophones stick out compared to regular words, that they’re real words (not just off-the-cuff onomatopoeia), that they try to imitate the thing they mean rather than just describing it, and that their meanings have to do with sensory experiences. This sounds like onomatopoeia, but it’s a lot more than that. Ideophones have been somewhat sidelined within traditional approaches to language because of a strange fluke: the original languages of academia (i.e. European languages, and especially French, German, and English) come from one of the very few language families in the world which don’t have ideophones. Since ideophones aren’t really present in the languages of the people who wrote about language most often, those writers mostly just ignored them. The less well-known linguistic literature on ideophones goes back decades, and variously describes them as vivid, quasi-synaesthetic, expressive, and so on.

What this boils down to is that for speakers of languages with ideophones, listening to somebody say a regular word is like this:

[Figure: listening to a regular word]

and listening to somebody say an ideophone is like this:

[Figure: listening to an ideophone]

Why, though?

Ideophones are iconic and/or sound-symbolic. These terms are slightly different but are often used interchangeably and both mean that there’s a link between the sound of something language-y (or the shape/form of something language-y in signed languages) and its meaning. This means that, when you’re listening to a regular word, you’re generally just relying on your existing knowledge of the combinations of sounds in your language to know what the meaning is:

[Figure: regular word processing]

…whereas when a speaker of a language with ideophones listens to an ideophone, they feel a rather more direct connection between what the ideophone sounds like and what the meaning of the ideophone is:

[Figure: ideophone processing]

These links between sound and meaning are known as cross-modal correspondences.

Thing is, it’s one thing for various linguists and speakers of languages with ideophones to identify and describe what’s happening; it’s quite another to see if that has any psycho/neurolinguistic basis. This is where my research comes in.

I took a set of Japanese ideophones (e.g. perapera, which means “fluently” when talking about somebody’s language skills; I certainly wish my Japanese was a lot more perapera) and compared them with regular Japanese words (e.g. ryuuchou-ni, which also means “fluently” when talking about somebody’s language skills, but isn’t an ideophone). My Japanese participants read sentences which were the same apart from swapping the ideophones and the arbitrary words around, like:

花子は ぺらぺらと フランス語を話す
Hanako speaks French fluently (where “fluently” = perapera).

花子は りゅうちょうに フランス語を話す
Hanako speaks French fluently (where “fluently” = ryuuchou-ni).

While they read these sentences, I used EEG (or electroencephalography) to measure their brain activity. This is done by putting a load of electrodes in a swimming cap like this:

[Figure: the electrode setup]

After measuring a lot of participants reading a lot of sentences in the two conditions, I averaged them together to see if there was a difference between the two conditions… and indeed there was:

[Figure: Figure 1 from the paper – ERP waveforms for both conditions]

The red line shows the brain activity in response to the ideophones, and the blue line shows the brain activity in response to the arbitrary words. The red line is higher than the blue line at two important points; the peak at about 250ms after the word was presented (the P2 component), and the consistent bit for the last 400ms (the late positive complex).
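If you’ve never worked with ERPs: the “averaging” step really is just averaging the voltage at each timepoint across trials, so that random noise cancels out while the stimulus-locked response survives. Here’s a toy simulation in plain Python — the sampling rate, trial counts, and the P2-like bump are all invented for illustration, not taken from the experiment:

```python
import math
import random

random.seed(1)
SRATE = 500                   # Hz (hypothetical sampling rate)
N_TRIALS, N_SAMP = 60, 400    # 400 samples = an 800 ms epoch

def simulate_epoch(p2_gain):
    """One noisy trial with a P2-like bump peaking around 250 ms (sample 125)."""
    return [p2_gain * 2.0 * math.exp(-((t - 125) ** 2) / 200.0)
            + random.gauss(0.0, 1.0)
            for t in range(N_SAMP)]

def erp(epochs):
    """Average the voltage across trials at each timepoint -> the ERP waveform."""
    return [sum(ep[t] for ep in epochs) / len(epochs) for t in range(N_SAMP)]

ideo = erp([simulate_epoch(1.5) for _ in range(N_TRIALS)])  # bigger P2
arb = erp([simulate_epoch(1.0) for _ in range(N_TRIALS)])   # smaller P2

peak = max(range(N_SAMP), key=lambda t: ideo[t])
print(f"P2 peak at ~{peak * 1000 // SRATE} ms; "
      f"ideophone P2 bigger: {ideo[peak] > arb[peak]}")
```

In the single trials the bump is buried in noise; after averaging 60 of them, the noise shrinks by a factor of √60 and the difference between the two conditions pops out at the peak, which is essentially what the red and blue lines in the figure show.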

Various other research has found that a higher P2 component is elicited by cross-modally congruent stimuli… i.e. this particular brain response is bigger to two things that match nicely (such as a high pitched sound and a small object). Finding this in response to the Japanese ideophones suggests that the brain recognises that the sounds of the ideophones cross-modally match the meanings of the ideophones much more than the sounds of the arbitrary words match the meanings of the arbitrary words. This may be why ideophones are experienced more vividly than arbitrary words.

higher P2 for ideophones

lower P2 for arbitrary words

As for the late positive complex, it’s hard to say. It could be that the cross-modal matching of sound and meaning in ideophones actually makes it harder for the brain to work out the ideophone’s role in a sentence because it has to do all the cross-modal sensory processing on top of all the grammatical stuff it’s doing in the first place. It’s very much up for discussion.