I’m not really one for new year’s resolutions, but sometimes they’re a useful crutch for getting things done. And so, 2015 will herald the dawn of a brand new academic blog, packed full of information and insights from the business end of sound-symbolism and synaesthesia research, along with a sprinkling of observations and anecdotes about life in early-career academia in general.
December, though, is a great time to start. What better way to begin a new blog than tapping into the BuzzFeed zeitgeist with a listicle with GIFs? Without further ado, I hereby present the moderately prestigious, barely anticipated, inaugural annual Papers of the Year awards listicle. In no particular order, here are the five most interesting and/or important papers I’ve read this year.
1. Behme (2014). “A ‘Galilean’ Science of Language.” Journal of Linguistics 50, no. 3: 671–704. doi:10.1017/S0022226714000061.
Far more august minds than mine have spilled a lot of virtual ink over Behme’s book review … well, I say book review, but it’s really a brief review of Chomsky’s book The Science of Language used as a launchpad to critically assess Chomsky’s entire scholarship. From the strictly academic side of things, I’d say that the majority of the criticism is justified, although I’m not sure I agree with Behme’s rather absolutist stance that ignoring or discarding any single piece of evidence that conflicts with your theory is reprehensible and invalidates your entire research programme. To do so on a massive scale is of course problematic, but I think there is a little more leeway in linguistics than Behme makes out. This is also a really interesting paper because of the reactions it inspires. We had a journal club session in the Neurobiology of Language department at MPI about this paper, and it was fascinating to see people’s opinions about the tone and style. Some (myself included) believe that reviews like this are perfectly fine if the author accepts that they have to stand behind their rather direct points of view; others feel that the tone was aggressive and that there’s no place in science for this kind of attack. Either way, it’s beautifully written and addresses some hugely important and uncomfortable truths about the science of language and The Science of Language.
2. Revill, Namy, DeFife, and Nygaard (2014). “Cross-Linguistic Sound Symbolism and Crossmodal Correspondence: Evidence from fMRI and DTI.” Brain and Language 128, no. 1: 18–24. doi:10.1016/j.bandl.2013.11.002.
(no free .pdf available)
I’ve been reading and re-reading this paper quite a lot this year. It’s an fMRI study on sound-symbolism which finds increased activation for sound-symbolic words in the left superior parietal cortex, which the authors take to mean the engagement of cross-modal sensory integration networks. That is to say, it seems that monolingual native English speakers are able to integrate sound and sensory meaning when the sound of the word naturally fits the meaning. My experiments use a similar approach with EEG, so it was very exciting to read a paper which independently expressed the same kind of ideas using a different imaging technique. Sadly, the wider behavioural experiment which they used to test the stimuli hasn’t been published yet – I’m interested to see the variation in the words they used, as some words were from languages without much sound-symbolism (Dutch, for example), while other words were from languages with lots of ideophones (e.g. Yoruba). I’m looking forward to reading about that in more detail.
3. Skipper (2014). “Echoes of the Spoken Past: How Auditory Cortex Hears Context during Speech Perception.” Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1651: 20130297. doi:10.1098/rstb.2013.0297.
(open access paper available here)
This paper addresses context beyond language and asks why neuroimaging meta-analyses show that the auditory cortex is less active (and sometimes deactivated) when people listen to meaningful speech compared to less meaningful sounds. Skipper’s model suggests that the auditory cortex doesn’t “listen” to speech, but instead matches the input to predictions made from context; the closer the prediction matches the input, the less error checking is required, and consequently the less the auditory cortex activates. The role of the auditory cortex, therefore, is to confirm or deny internal predictions about the identity of sounds. When predictions originating from PVF-SP (posterior ventral frontal regions for speech perception) regions are accurate, no error signal is generated in the auditory cortex and so less processing is required. More accurate predictions could be generated from verbal and non-verbal context (indeed, Skipper argues that verbal versus non-verbal is a false distinction), resulting in less error signal, and therefore less metabolic expenditure (suggesting a metabolic conservation basis for the existence of the predictive model).
It’s interesting, and definitely plausible, but I think he goes too far. He throws the baby out with the bathwater when arguing against the necessity of traditional linguistic units; just because context (rather than specifically phonemes, syllables, etc.) seems to be the basis for predictions and error checking, that doesn’t mean that well-attested traditional linguistic units aren’t important or aren’t there. Indeed, if they’re not important, why are they there, and why are they so consistently distinctive?
Linguistic reservations aside, this is one of the most interesting ideas I’ve read this year.
4. Perniss and Vigliocco (2014). “The Bridge of Iconicity: From a World of Experience to the Experience of Language.” Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1651: 20130300. doi:10.1098/rstb.2013.0300.
(open access paper available here)
Another paper from the special issue of Phil. Trans. Royal Society B on language as a multimodal phenomenon. I like how the three functions of iconicity are made clear here: displacement, referentiality, and embodiment. I also like how an attempt is made at categorising and more precisely defining iconicity, as pinning it down has been quite tricky, and different researchers use different terms in different ways. Their definition of iconicity has undergone a (welcome) narrowing compared to their definition in Perniss et al. (2010); they now equate it directly to sound-symbolism (which I’m not sure I fully agree with), and define it as “putatively universal as well as language-specific mappings between given sounds and properties of referents”. This version of iconicity does not include systematicity, or any “non-arbitrary mappings achieved simply through regularity or systematicity of mappings between phonology and meaning”. I’m neutral on this. Certainly, statistical sound-symbolism is different from sensory sound-symbolism, but where do we draw the line between conventionalised language-specific sound-symbolism and statistical sound-symbolism? How is it possible to differentiate them, given that language-specific sound-symbolism will also be statistically overrepresented with certain concepts? Moreover, what are phonaesthemes now? Can you distinguish between statistical phonaesthemes and sensory phonaesthemes which are also very common? This paper goes further than most in terms of categorising and defining the casserole of concepts related to iconicity, and it defines the state and purpose of iconicity very well.
5. Shin and Kim (2014). “Both ‘나’ and ‘な’ Are Yellow: Cross-Linguistic Investigation in Search of the Determinants of Synesthetic Color.” Neuropsychologia. doi:10.1016/j.neuropsychologia.2014.09.032.
(no free .pdf available)
This is a study of four trilingual Korean-Japanese-English speakers who also have grapheme-colour synaesthesia (which wins the award of “most niche participant group of 2014” for me). They found that all four of them had broadly similar colours for the same characters across languages, and that the effect was driven more strongly by sound than by the visual features of the characters. This means that grapheme-colour synaesthesia seems to be driven by the sounds of the graphemes more than their shapes. This is rather an exciting find, because it hints that a phenomenon previously thought to be non-linguistic may well be rooted in language, and this may have interesting implications for the processing of cross-modal correspondences in language in non-synaesthetes too.