
Visualising football league tables

I was looking at the Premiership league table today, and it looks like this:

current league table

It’s pretty informative; we can see that Leicester are top, Aston Villa are bottom, and that the rest of the teams are somewhere in between. If we look at the points column on the far right, we can also see how close things are; Villa are stranded at the bottom and definitely going down, Leicester are five points clear, and there’s a close battle for the final Champions League spot between Manchester City, West Ham, and Manchester United, who are only separated by a single point.

Thing is, that requires reading the points column closely. If you take the league table as a simple visual guide, it doesn’t show the distribution of teams throughout the league very well. If you say that Stoke are 8th, that sounds like a solid mid-table season… but what it doesn’t tell you is that Stoke are as close to 4th place and the Champions League as they are to 10th place, which is also solid mid-table. A more visually honest league table would look a little like this*:

current league table dragged about a bit

*definitely not to scale.

Screen-shotting a webpage and dragging things about in MS Paint isn’t the best way to go about this, so I’ve scraped the data and had a look at plotting it in R instead.

Firstly, let’s plot each team as a coloured dot, equally spaced apart in the way that the league table shows them:

League position right now

(colour-coding here is automatic; I tried giving each point the team home shirt colours, but just ended up with loads of red, blue, and white dots, which was actually a lot worse)

Now, let’s compare that with the distribution of points to show how the league positions are distributed. Here, I’ve jittered them slightly so that teams with equal points (West Ham and Manchester United in 5th and 6th, Everton and Bournemouth in 12th and 13th) don’t overlap:
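The jitter itself is nothing fancy: each tied points total just gets a small random offset so the dots separate. I did the plotting in R, but the idea is language-agnostic; here’s a minimal Python sketch, where the points totals are roughly those from the table at the time and the spread value is arbitrary:

```python
import random

def jitter_ties(points, spread=0.3, seed=42):
    """Nudge tied points totals by a small random offset so dots don't overlap."""
    random.seed(seed)
    out = []
    for p in points:
        if points.count(p) > 1:  # only tied totals get jittered
            out.append(p + random.uniform(-spread, spread))
        else:
            out.append(p)
    return out

# Roughly the top-six totals: West Ham and Man Utd tied on 50
pts = [66, 61, 55, 51, 50, 50]
jittered = jitter_ties(pts)
```

Only the tied totals get nudged, so every other team stays exactly on its points value.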

League points right now

This is far more informative. It shows just how doomed Aston Villa are, and shows that there’s barely any difference between 10th and 15th. It also shows that the fight for survival is between Norwich, Sunderland, and Newcastle, who are all placed closely together.

Since the information is out there, it’d also be interesting to see how this applies to league position over time. Sadly, Premiership matches aren’t all played at 3pm on Saturday anymore; they’re staggered over several days. This means that the league table changes every couple of days, which is far too often to plot over most of a season. So, I wrote a webscraper to get the league tables every Monday between the start of the season and now, which roughly corresponds to one full round of matches per snapshot.
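Generating the snapshot dates is the easy half of the scraper (the scraping itself depends entirely on the site, so it’s omitted here). A quick Python sketch, assuming the season kicked off on Saturday 8 August 2015 and using an arbitrary end date to stand in for “now”:

```python
from datetime import date, timedelta

def mondays_between(start, end):
    """Return every Monday from start to end, inclusive."""
    # date.weekday() returns 0 for Monday
    first = start + timedelta(days=(7 - start.weekday()) % 7)
    mondays = []
    d = first
    while d <= end:
        mondays.append(d)
        d += timedelta(weeks=1)
    return mondays

# Season kicked off on Saturday 8 August 2015, so the first
# snapshot is Monday 10 August; the end date is illustrative.
snapshots = mondays_between(date(2015, 8, 8), date(2016, 4, 4))
```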

Let’s start with looking at league position:

League position over time

This looks more like a nightmare tube map than an informative league table, but there are a few things we can pick out. Obviously, there’s how useless Aston Villa have been, rooted to the bottom since the end of October. We can also see the steady rise of Tottenham, in a dashing shade of lavender, working their way up from 8th in the middle of October to 2nd now. Chelsea’s recovery from flirting with relegation in December to being secure in mid-table now is fairly clear, while we can also see how Crystal Palace have done the reverse, plummeting from 5th at the end of the year to 16th now.

An alternative way of visualising how well teams do over time is by plotting their total number of points over time:

League points over time

This is visually more satisfying than looking at league position over time, as we can see how the clusters of teams in similar positions have formed. Aston Villa have been bottom since October, but they were at least relatively close to Sunderland even at the end of December. Since then, though, the gap between bottom and 19th has opened up to nine points. We can also see how Leicester and Arsenal were neck and neck in first and second for most of the season, but the moment when Leicester really roared ahead was in mid-February. Finally, the relegation fight again looks like it’s a competition between Norwich, Sunderland, and Newcastle for 17th; despite Crystal Palace’s slump, the difference between 16th and 17th is one of the biggest differences between consecutive positions in the league. This is because Norwich, Sunderland, and Newcastle haven’t won many points recently, whereas Swansea and Bournemouth, who were 16th and 15th and also close to the relegation zone back in February, have both had winning streaks in the last month.

One of the drawbacks with plotting points over time is that, for most of the early part of the season, teams are so close together that you can’t really see the clusters and trends.

So, we can instead normalise the points. For each week, I calculated the points difference between the top and bottom teams, and then expressed every team’s total as a proportion of that range.

For example, right now, Leicester have 66 points and Aston Villa have 16. That’s a nice round difference of 50 points across the whole league. Let’s express that points difference on a scale of 0 to 1, where Aston Villa are at one extreme end at 0 and Leicester are at the other extreme end at 1.

Tottenham, in 2nd, have 61 points: five points fewer than Leicester and 45 more than Aston Villa. Proportionally, they’re 90% of the way along the points spectrum, so they get a relative position of 0.9, as shown below:
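In code, the whole calculation is a one-liner per team. A quick Python sketch using the numbers above (again, the original plotting was in R, but the arithmetic is the same anywhere):

```python
def relative_position(points, bottom, top):
    """Express a points total as a 0-1 fraction of the bottom-to-top range."""
    return (points - bottom) / (top - bottom)

# Leicester (top) on 66 points, Aston Villa (bottom) on 16,
# Tottenham on 61, as in the worked example above
tottenham = relative_position(61, bottom=16, top=66)
```

By construction, the bottom team always lands on 0 and the top team on 1, whatever the week’s points range is.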

Relative league position over time

This is a lot more complicated, and perhaps needlessly so. It reminds me more of stock market data than a football league table. I plotted it this way to show how close or far apart teams were in the early parts of the season, but even then, the lines are messy and all over the place until about the start of October, when the main trends start to show. One practical implication is that, however badly your team are doing in terms of points and position, there’s little use in sacking a manager before about November; there isn’t enough data, and teams are too close together, to tell whether it’s a minor blip or a terminal decline. Of course, if your team are doing badly in terms of points and position and playing like they’ve never seen a football before, then there’s a definite problem.

To make it really fancy/silly (delete as appropriate), I’ve plotted form guides of relative league position over time. Instead of joining each individual dot each week as above, it smooths over data points to create an average trajectory. At this point, labelling the relative position is meaningless as it isn’t designed to be read off precisely, but instead provides an overall guide to how well teams are doing:

Relative league position over time smooth narrative (span 0.5)
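The span of 0.5 in the figure suggests a loess-style smoother. As a language-agnostic stand-in for that, a centred moving average captures the same idea of averaging over nearby weeks; here’s a minimal Python sketch with made-up relative-position values:

```python
def moving_average(values, window=5):
    """Centred moving average; the window shrinks at the edges."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        chunk = values[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Made-up relative-position values for one team over ten weeks
series = [0.2, 0.6, 0.3, 0.5, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65]
trend = moving_average(series)
```

The smoothed line has one value per week, like the raw series, but the early jumpiness gets averaged away into an overall trajectory.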

Here, the narratives of each team’s season are more obvious. Aston Villa started out okay, but sank like a stone after a couple of months. Sunderland were fairly awful for a fairly long time, but the upswing started with Sam Allardyce’s appointment in October and they’ve done well to haul themselves up and into contention for 17th. Arsenal had a poor start to the season, then shot up rapidly to first near the end of the year, but then they did an Arsenal and got progressively worse from about January onwards. Still, their nosedive isn’t as bad as Manchester City’s; after being top for the first couple of months, they’ve drifted further and further down. The slide has been more pronounced since Pep Guardiola was announced as their next manager in February, but they were quietly in decline for a while before that anyway. Finally, looking at Chelsea’s narrative line is interesting. While they’ve improved since Guus Hiddink took over, their improvement in league position is far more to do with other teams declining over the last couple of months. Four teams (Crystal Palace, Everton, Watford, and West Brom) have crossed Chelsea’s narrative line since February.

I don’t expect these graphs to catch on instead of league tables, but I definitely find them useful for visualising how well teams are doing in comparison to each other, rather than just looking at their position.


Sound-symbolism boosts novel word learning: the MS Paint version

I have a new article out!

Gwilym Lockwood, Mark Dingemanse, and Peter Hagoort. 2016. “Sound-Symbolism Boosts Novel Word Learning.” Journal of Experimental Psychology: Learning, Memory, and Cognition. doi:10.1037/xlm0000235 (download link, go on, it’s only eight pages)

and I’m particularly proud of this one because:

a) it’s a full article discussing some of the stats I’ve been talking about at conferences for almost two years, and

b) it’s probably the only scientific article to formally cite Professor Oak’s Pokédex.

So, if you like things like iconicity and logit mixed models and flawed experiments cunningly disguised as pre-tests that I meant to do all along, you can read it here.

Enough of that, though. I know that what you’re really here for is Sound-symbolism boosts novel word learning: the MS Paint version.

The first thing we did was to select our words from almost a hundred ideophones and arbitrary adjectives. Participants heard the Japanese word, then saw two possible translations – one real, one opposite – and had to guess which was the correct one. This was pretty easy for the ideophone task. People can generally guess the correct meaning with some certainty, because it just kind of sounds right for one of the options (due to the cross-modal correspondences between the sound of the word and its sensory meaning). It was a fair bit harder for the arbitrary adjectives, where there are no giveaways in the sound of the word.

2AFC stimuli selection

It’s kind of taken for granted in the literature that people can guess the meanings of ideophones at above chance accuracy in a 2AFC test, but I’ve always struggled to find a body of research which shows this. This pre-test shows that people can indeed guess ideophones at above chance accuracy in a 2AFC test – at 63.1% accuracy (μ=50%, p<0.001) across 95 ideophones, in fact. So, now, anybody who wants to make that claim has the stats to do so. Nice. We’re now rerunning this online with thousands of people as part of the Groot Nationaal Onderzoek project, so stay tuned for more on that.
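For anyone wanting to run the same kind of check on their own data, the comparison against chance is just a one-sided binomial test. Here’s a self-contained Python sketch; the trial counts below are made up purely for illustration and are not the counts from the paper:

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): a one-sided exact binomial test."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Illustrative numbers only (not the real trial counts):
# 120 correct guesses out of 190 trials is roughly 63% accuracy
# against a 50% chance baseline.
p_value = binom_tail(120, 190)
```

With accuracy that far above 50%, the tail probability comes out comfortably below 0.001, which is the shape of the claim being made.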

Then, two different groups did a learning task. We originally had the learning task as a 2AFC setup where participants learned by guessing and then getting feedback. In terms of results, this did work… but about a third of the participants realised that they could “learn” by ignoring the Japanese words completely and just remembering to pick fat when they saw the options fat and thin. Damn.

2AFC failed test

Anyway. We got two more groups in to do separate learning and test rounds with a much better design. One group got all the ideophones, half with their real meanings, half with their opposite meanings. The other group got all the arbitrary adjectives, half with their real meanings, half with their opposite meanings.

In the same way that it’s easy to guess the meanings of the ideophones, we predicted that the ideophones with their real translations would be easy to learn because of the cross-modal correspondences between linguistic sound and sensory meaning…

concept sounds participants real trimmed

…that the ideophones with their opposite translations would be hard to learn, because the sounds and meanings clash rather than match…

concept sounds participants opposite trimmed

…and that there wouldn’t be much difference between conditions for the arbitrary adjectives, because there’s no real association between sound and meaning in arbitrary words anyway.

concept sounds participants arbitrary trimmed

And sure enough, that’s exactly what we found. Participants were right 86.1% of the time for ideophones in the real condition, but only 71.1% for ideophones in the opposite condition. With the arbitrary adjectives, it was 79.1% versus 77%, which isn’t a proper difference.

Additional bonus for replication fans! (that’s everybody, right?): in a follow-up EEG experiment doing this exact same task with Japanese ideophones, another 29 participants got basically the same results (86.7% for the real condition, 71.3% for the opposite condition). That’s going to be submitted in the next couple of weeks.

Here’s the histogram from the paper… but in glorious technicolour:

accuracy for each condition with both experiments (colour) updated

(It would have cost us $900 to put one colour figure in the article, even though it’s the publisher who’s printing it and making money from it. The whole situation is quite silly.)

The point of this study is that it’s easier to learn words that sound like what they mean than words that sound like the opposite of what they mean, with words that don’t particularly sound like anything somewhere in the middle. This seems fairly obvious, but for a long time people have assumed that it doesn’t really happen. There’s been a fair bit of research on onomatopoeia and ideophones helping babies learn their first language, but not much yet on adults. It also provides some support for the broader suggestion that, across languages, we use similar sounds to talk about and understand sensory things (but not much else), which in turn suggests that sound-symbolic words may well have been how language started out in the first place.

I’d love to re-run this study on a more informal (and probably unethical) basis where a class of school students learning Japanese are given a week to learn the same word list for a vocab test where they’d have to write down the Japanese words on a piece of paper. I reckon that there’d be the same kind of difference between conditions, but it’d be nice to see that happen when they really have to learn the words to produce a week later, not just recognise a few minutes later. If anybody wants to offer me a teaching position at a high school where I can try this out and probably upset lots of parents, get in touch; I need a job when my PhD contract runs out in August.

The thing I find funniest about this entire study is that when I was studying Japanese during my undergrad degree, I found ideophones really difficult to learn. I thought they all sounded kind of the same, and pretty daft to boot. The ideophone for “exciting/excited” is wakuwaku, which I felt so uncomfortable saying that I feigned indifference about things in oral exams to avoid saying it (but to be fair, feigned indifference was my approach to most things in my late teens and early twenties). There’s probably an ideophone to express the internal psychological conflict you get when you realise you’re doing a PhD in something you always tried to ignore during your undergrad degree, but I’m not sure what it is. I’ll bet my old Japanese lecturers would be pretty niyaniya if they knew, though.


On Open ideology

I’ve spent a while recently trying to find the name of an eponymous adage. You know, like Poe’s Law (that extremist views and satire are often indistinguishable without an overt indicator) or Betteridge’s law of headlines (that any headline ending in a question mark can be answered with the word no).

What I’m looking for is:

the smaller the difference between your worldview and another’s, the more you fixate on that small difference

For example: my political and social views are closest to the editorial line taken by The Guardian, but The Guardian makes me irate in a way that The Telegraph doesn’t (and this isn’t just because of The Grauniad’s anything-goes approach to spelling either).

Whatever it’s called, this adage in action looks a bit like this:

compromise flags fuck you

This is a fairly long way of bringing up OpenCon 2015 in Brussels a couple of weeks ago. OpenCon is an annual conference about furthering Open Access, Open Data, and Open Education… but it’s also wider than that, and hard to define, because problems with Open Access, Open Data, and Open Education directly and indirectly lead to most problems in science in general (I can’t speak for the humanities, but it’s probably the same there). There’s a ton of literature out there on why openness is needed, so I won’t go into that here, but long story short: science is messed up, lots of people agree on this, and change isn’t happening fast enough.

It was an excellent conference full of excellent people doing excellent things, and I left feeling hopeful that we just might get these problems sorted out. Various people have blogged about the many, many positives already (e.g. here, here, and here, and there’ll be others out there), so I’m writing this blog as a note of caution.

OpenCon felt ideological. It was invigorating. It was like being back in undergrad, surrounded by strong ideas and forceful debate.

I’d say that about 95% of OpenCon attendees agreed on about 95% of things. Naturally, this meant that debate tended to centre around the bits where people didn’t agree, and when talking about ideas, this is great.

But the thing about ideology is that it rarely reflects the world at large.

The shitty MS Paint figure is obviously a massive exaggeration, but I am concerned that this is where we’ll end up: fixating on the small differences and not getting things done. I’m concerned that it’s like the late 1800s in Russia, and that we’ll end up like the Russian revolutionaries. In 1903, the Mensheviks and the Bolsheviks split over small, party-internal matters, which meant that the Romanovs (read: Elsevier) could continue abusing their power for several years without a coherent opposition… and when the inevitable revolution did happen, there were so many factions that it took a dictatorship to hold them together.

For the record, I’m an Open Menshevik. All the tools are out there already. Sure, the infrastructure isn’t the best, but it is workable. All it really needs is wider, much wider, uptake and everything else will gradually follow… which means moving away from the ideological things and back onto the practicalities of everything we already agree on.

venn diagram

Of course, let’s keep talking about the ideology of Open. It’s important to know where we’re going. But I feel that a long(er) view is needed.

The debate about the merits of Green vs. Gold OA doesn’t really matter if people outside OpenCon aren’t doing it that much in the first place; the debate about APCs for OA journals doesn’t really matter if people outside OpenCon aren’t publishing in OA journals because they still (mistakenly) think they’re a bit shit; the debate about making things machine-readable doesn’t really matter if most data isn’t made available in the first place.

Some of the best talks and workshops I saw were about teaching people how to use the existing infrastructure in Open ways; data archiving, green post-print archiving, making convincing pro-OA arguments to people who don’t know that much about it. We all agree that this is A Good Thing, but sometimes I think we get ahead of ourselves, and forget that we need to keep doing more of this.

Bjorn Brembs said in his talk that we are perhaps a little self-congratulatory sometimes, and while a lot of what people are doing really does deserve recognition and congratulation, I think there’s a lot more groundwork to be laid before we can start thinking about the ideological stuff in a practical way.

Hopefully there’ll be more groundwork laid by the time OpenCon 2016 rolls around, and more still each year, until the Open revolution is not just inevitable but successful.


(almost) everything you ever wanted to know about sound-symbolism research but were too afraid to ask.

Publications are like buses. Not because you spend most of your PhD with no publications then two turn up at once (although that is what’s just happened to me), but because you might get overtaken by another bus going the same way, and you might want to be somewhere else by the time you get to your original destination.

The bus I’ve just taken is my new review paper:

Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: a review of behavioral, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6, 1246. http://doi.org/10.3389/fpsyg.2015.01246

I wrote it along with Mark Dingemanse, my supervisor at the Max Planck Institute. It covers experimental research on sound-symbolism from the last few years and pulls together the main themes and findings so far. To summarise, these are:

  1. That large vowels (e.g. a, o) are associated with large things and slow things and dark things and heavy things
  2. That small vowels (e.g. i, e) are associated with small things and fast things and bright things and light things
  3. That voiced consonants (e.g. b, g) have the same kind of associations as large vowels
  4. That voiceless consonants (e.g. p, k) have the same kind of associations as small vowels
  5. That this is probably due to a combination of acoustic properties (i.e. the way something sounds when you hear it) and articulatory properties (i.e. the way something feels when you say it)
  6. That these cross-modal associations mean people can guess the meanings of sound-symbolic words in languages that they don’t know
  7. That these cross-modal associations mean children and adults learn sound-symbolic words more easily
  8. That these cross-modal associations in sound-symbolic words elicit either different brain processes from regular words and/or stronger versions of the same brain processes as regular words
  9. That it’s more informative to investigate these cross-modal associations using real sound-symbolic words from real languages than using non-words from made-up languages
  10. That it’s more informative to investigate these cross-modal associations using complicated experiment tasks than asking participants to choose between two options
  11. That it’s not accurate to treat arbitrariness and iconicity as two competitors in a zero-sum language game, even if it does make our work seem more important

We’re pretty happy with this, and the paper is a nice one-stop shop for everything you’ve ever wanted to know about sound-symbolism research but were too afraid to ask. We don’t finish it off with a grand model of how it works, because we don’t really know (and because I’ve still got at least two more experiments to do in my PhD before I’ll have a decent idea), but we do collect a lot of individual strands of research into a few coherent themes which should be useful for anybody else who’s doing similar stuff.

Even though it’s hot off the press this morning, it’s taken a long time to get to this stage. I started doing all the reading and the writing in spring 2014, then Mark and I restructured it quite a lot, and then it got put on the back burner while I read more things and did more things. We came back to it at the start of this year, added and changed a few things, and submitted it earlier this summer. After a fairly quick and painless review process, it’s now out.

The first frustration is that there was a small but important misprint in the text; it’s frustrating that it’s there, it’s frustrating that it slipped past two authors, two reviewers, and an editor, and it’s frustrating that Frontiers won’t amend it (despite being an online-only journal). In this misprint, we accidentally misreported Moos et al. (2014). They found that people associate the vowel [a] with the colour red, and that this colour association becomes more yellow/green as the vowel gets smaller (like the vowel [i]). However, we wrote this the wrong way round in the text and accompanying figure. So, here’s the correct version of Figure 1 from the review paper:

cross-modal mappings - vowel space (bw) for distribution


Secondly, since submitting the article and having the positive reviews back, I’ve come across two studies in particular which I wish we could have included but couldn’t because we were already on that bus. These studies are:

Sidhu, D. M., & Pexman, P. M. (2015). What’s in a Name? Sound Symbolism and Gender in First Names. PLoS ONE, 10(5), e0126809. http://doi.org/10.1371/journal.pone.0126809 (which starts and ends with the Shakespeare quote about roses by different names smelling as sweet to describe arbitrariness and iconicity, which is a quote I’ve always wanted to use myself, so good on them)

Jones, M., Vinson, D., Clostre, N., Zhu, A. L., Santiago, J., & Vigliocco, G. (2014). The bouba effect: sound-shape iconicity in iterated and implicit learning. In Proceedings of the 36th Annual Meeting of the Cognitive Science Society (pp. 2459–2464). Québec. (which I’d seen referred to in various presentations as a work in progress, but I hadn’t come across the actual, citable CogSci conference paper until a couple of weeks ago)

Both these studies investigate the kiki/bouba effect, which is the way people associate spiky shapes with spiky sounds (i.e. small vowels and voiceless consonants) and round shapes with round sounds (i.e. rounded vowels like o and voiced consonants). Both studies have well-designed methods which are quite complicated to explain but address the questions really well, and find similar things. The original kiki/bouba studies found the split between round and spiky from making people choose between two options, and so people chose round shapes with round sounds and spiky shapes with spiky sounds. Simple enough.

However, these two studies show that roundness and spikiness don’t contribute equally to the effect. Rather, there’s a massive effect of roundness, while the association between spiky sounds and spiky shapes is much weaker, and may even just be an association by default because it was the other option in the original studies.

I’d then have included another paragraph or two in the review paper about how future studies can and should address whether the associations outlined in points 1-4 fall along an even continuum (in the way that size associations seem to fall evenly between i and a) or whether one particular feature is driving the effect (in the way that roundness drives the round/spiky non-continuum). Sadly, I only came across these studies after it was too late to include them, but hopefully they’ll be picked up on by others in future!