
Ideophones in Japanese modulate the P2 and late positive complex responses: MS Paint version

I just had my first paper published:

Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6, 933. http://doi.org/10.3389/fpsyg.2015.00933

It’s completely open access, so have a look (and download the PDF, because it looks a lot nicer than the full text).

It’s a fuller, better version of my MSc thesis, which means that I’ve been working on this project on and off since about April 2013. Testing was done in June/July 2013 and November 2013. Early versions of this paper have been presented at an ideophone workshop in Tokyo in December 2013, a synaesthesia conference in Hamburg in February 2014, and a neurobiology of language conference in Amsterdam in August 2014. It was rejected once from one journal in August 2014, and was submitted to this journal in October 2014. It feels great to have it finally published, but also kind of anticlimactic, given that I’m focusing on some different research now.

I feel like the abstract and full article describe what’s going on quite well; this is a generally under-researched area within the (neuro)science of language, so the paper is written with the sizeable number of people who aren’t already familiar with ideophones in mind. However, if you can’t explain your research using shoddy MS Paint figures, then you can’t explain it at all, so here goes.

Ideophones are “marked words which depict sensory imagery” (Dingemanse, 2012). In essence, this means that ideophones stick out compared to regular words, that they are real words (not just off-the-cuff onomatopoeia), that they try to imitate the thing they mean rather than just describing it, and that their meanings have to do with sensory experiences. This sounds like onomatopoeia, but it’s a lot more than that. Ideophones have been somewhat sidelined within traditional approaches to language because of a strange fluke: the original languages of academia (i.e. European languages, especially French, German, and English) belong to one of the very few language families in the world that don’t really have ideophones. Since ideophones weren’t present in the languages of the people who wrote about language most often, those writers mostly just ignored them. The less well-known linguistic literature on ideophones goes back decades, and variously describes them as vivid, quasi-synaesthetic, expressive, and so on.

What this boils down to is that for speakers of languages with ideophones, listening to somebody say a regular word is like this:

[MS Paint figure: listening to a regular word]

and listening to somebody say an ideophone is like this:

[MS Paint figure: listening to an ideophone]

Why, though?

Ideophones are iconic and/or sound-symbolic. These terms are slightly different but are often used interchangeably and both mean that there’s a link between the sound of something language-y (or the shape/form of something language-y in signed languages) and its meaning. This means that, when you’re listening to a regular word, you’re generally just relying on your existing knowledge of the combinations of sounds in your language to know what the meaning is:

[MS Paint figure: regular word processing]

…whereas when a speaker of a language with ideophones listens to an ideophone, they feel a rather more direct connection between what the ideophone sounds like and what the meaning of the ideophone is:

[MS Paint figure: ideophone processing]

These links between sound and meaning are known as cross-modal correspondences.

The thing is, it’s one thing for various linguists and speakers of languages with ideophones to identify and describe what’s happening; it’s quite another to see whether that has any psycho/neurolinguistic basis. This is where my research comes in.

I took a set of Japanese ideophones (e.g. perapera, which means “fluently” when talking about somebody’s language skills; I certainly wish my Japanese was a lot more perapera) and compared them with regular Japanese words (e.g. ryuuchou-ni, which also means “fluently” when talking about somebody’s language skills, but isn’t an ideophone). My Japanese participants read sentences which were identical apart from whether that slot contained an ideophone or an arbitrary word, like:

花子は ぺらぺらと フランス語を話す
Hanako-wa perapera-to furansugo-o hanasu
Hanako speaks French fluently (where “fluently” = perapera).

花子は りゅうちょうに フランス語を話す
Hanako-wa ryuuchou-ni furansugo-o hanasu
Hanako speaks French fluently (where “fluently” = ryuuchou-ni).
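For the programmatically minded, here’s a minimal Python sketch of how minimal pairs like these can be assembled, with the adverb as the only thing that changes between conditions. The sentence frame, the romanised words, and the list structure are my own illustrations, not the actual stimulus code or lists from the paper.

```python
# Hypothetical adverb pairs: (ideophone, arbitrary equivalent)
adverb_pairs = [
    ("perapera-to", "ryuuchou-ni"),  # both ~ "fluently" (of speech)
]

# Hypothetical sentence frame; "{adv}" marks the manipulated slot.
frames = ["Hanako-wa {adv} furansugo-o hanasu"]  # "Hanako speaks French {adv}"

stimuli = []
for frame in frames:
    for ideophone, arbitrary in adverb_pairs:
        stimuli.append({"condition": "ideophone",
                        "sentence": frame.format(adv=ideophone)})
        stimuli.append({"condition": "arbitrary",
                        "sentence": frame.format(adv=arbitrary)})

for s in stimuli:
    print(s["condition"], "→", s["sentence"])
```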

While they read these sentences, I used EEG (or electroencephalography) to measure their brain activity. This is done by putting a load of electrodes in a swimming cap like this:

[Photo: electrode set-up]
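As an aside for the technically inclined: before any averaging happens, the continuous EEG is typically band-pass filtered to strip out slow drift and high-frequency noise. Here’s a rough sketch of what that step might look like in Python with SciPy, using fake data; the sampling rate and filter cutoffs are common defaults I’m assuming for illustration, not the paper’s exact preprocessing parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                               # sampling rate in Hz (assumed)
eeg = np.random.randn(32, 60 * int(fs))  # fake data: 32 channels x 60 s

# 0.1-30 Hz band-pass, a common choice for ERP work (assumed here)
b, a = butter(N=4, Wn=[0.1, 30.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg, axis=1)   # zero-phase filtering along time
```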

After recording a lot of participants reading a lot of sentences in the two conditions, I averaged the brain responses together to see if there was a difference between the two conditions… and indeed there was:

[Figure 1 from the paper: grand average ERP waveforms for the two conditions]

The red line shows the brain activity in response to the ideophones, and the blue line shows the brain activity in response to the arbitrary words. The red line is higher than the blue line at two important points: the peak at about 250 ms after the word was presented (the P2 component), and the sustained stretch over the last 400 ms (the late positive complex).
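In code terms, the averaging and the two measurements boil down to something like the sketch below: average each participant’s trials into per-condition ERPs, take the mean amplitude in each time window, and compare conditions across participants. The array shapes, sampling rate, and window boundaries are assumptions for illustration, not the paper’s actual parameters or analysis script.

```python
import numpy as np
from scipy.stats import ttest_rel

fs = 500                          # Hz (assumed)
t = np.arange(-0.2, 1.0, 1 / fs)  # epoch from -200 ms to +1000 ms (assumed)

rng = np.random.default_rng(0)
n_subj, n_trials, n_times = 20, 80, t.size
# Fake single-trial voltages at one electrode: subjects x trials x time
ideo = rng.normal(size=(n_subj, n_trials, n_times))
arb = rng.normal(size=(n_subj, n_trials, n_times))

# 1) Average over trials -> one ERP per subject per condition
erp_ideo = ideo.mean(axis=1)
erp_arb = arb.mean(axis=1)

# 2) Mean amplitude in each window (windows assumed here:
#    P2 ~ 200-300 ms; LPC taken as the last 400 ms of the epoch)
def window_mean(erp, lo, hi):
    mask = (t >= lo) & (t <= hi)
    return erp[:, mask].mean(axis=1)

p2_ideo, p2_arb = window_mean(erp_ideo, 0.2, 0.3), window_mean(erp_arb, 0.2, 0.3)
lpc_ideo, lpc_arb = window_mean(erp_ideo, 0.6, 1.0), window_mean(erp_arb, 0.6, 1.0)

# 3) Paired comparison across subjects for each component
print("P2: ", ttest_rel(p2_ideo, p2_arb))
print("LPC:", ttest_rel(lpc_ideo, lpc_arb))
```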

Various other research has found that a higher P2 component is elicited by cross-modally congruent stimuli… i.e. this particular brain response is bigger in response to two things that match nicely (such as a high-pitched sound and a small object). Finding this in response to the Japanese ideophones suggests that the brain recognises that the sounds of the ideophones cross-modally match their meanings much more closely than the sounds of the arbitrary words match theirs. This may be why ideophones are experienced more vividly than arbitrary words.

[MS Paint figures: higher P2 for ideophones; lower P2 for arbitrary words]

As for the late positive complex, it’s hard to say. It could be that the cross-modal matching of sound and meaning in ideophones actually makes it harder for the brain to work out the ideophone’s role in a sentence because it has to do all the cross-modal sensory processing on top of all the grammatical stuff it’s doing in the first place. It’s very much up for discussion.
