On Tuesday this week, I had the great pleasure of giving this month’s Pint of Science talk at Bridie Molloy’s in St. John’s! I had first heard of Pint of Science while in the UK but never managed to go to an event because they always sold out before I could get a ticket! Thankfully, this was not an issue here in St. John’s, and after attending one in September (talks happen monthly here), I asked if it would be possible to give a talk myself, thinking that music science might be a topic of interest for the general public.
I was absolutely thrilled (though nervous on the spot!) at the great turnout for my talk – thanks to everyone who came out to hear me speak a little bit about the work that goes on in music perception and that went on through my PhD. I was also super happy to chat with Fred and Krissy from CBC’s St. John’s Morning Show on Monday morning and with Paul Émile d’Entremont for Radio-Canada, which aired on Tuesday (find 7:50 on Tuesday Nov 27). Yesterday, Atefeh, the organizer of Pint of Science MUN, and I spoke with Cameron Kilfoy from Kicker News for a print piece about PoS & a bit about my talk. So the media coverage has been awesome, and I’m definitely looking forward to bringing music science to the community more as I keep working here at MUN. I think it’s so important for researchers to be able to share what we do with the general public – in many cases we are funded by public money, which is exactly why events like PoS are so important to support and participate in! In my field it’s so much fun anyway, because everyone can relate to music, so there’s always an interesting conversation to be had!
So, to give a brief summary of what I talked about for those of you who couldn’t come out to the talk itself:
I wanted to give a little introduction to the scope of research in music science, so I talked about a few very different research angles, all grounded in music. For example, some people are digging into the question of why we have music in the first place. Every human culture we know of – ever – has music, so it must have some crucial role or it wouldn’t be so pervasive. I’d say the most convincing theory is that of social bonding – the act of making music together, or listening or moving to music together, forms a connection that no other medium can achieve – and this has been crucial to the formation of human groups far larger than those of almost any other animal on the planet! In the world of health care, music is used with dementia patients as a stimulant to bring them out of catatonic states, while it is used with Parkinson’s patients to help them walk more steadily. In the world of performance and creativity research, we can look at performers’ brains while they play music from a score and while they improvise, and see how those two types of performance differ! In computational musicology, the basic idea is that if we can make a computer do something closely enough to how a human does it, then we’re understanding something about how the brain works – so we build computer models to find a melody, to predict what’s going to come next, to identify emotion, to find the beat, to separate audio into all the right instruments, etc. I say ‘we’ here because I dipped into this world in my PhD, merging it with what I would say is my main sub-field: music perception, or how our brains process and make sense of music. This includes pitch, rhythm, harmony, timbre, phrasing, melody identification, and much more! I’m also really interested in how musicians’ perceptions differ from those of people without any formal musical training.
Clearly, there’s so much to talk about! I focused on musical expectations – the predictions we make about what’s going to come next based on what we’ve heard before. Now, why do we care about predictions in music? Good question! Prediction is adaptive – you want to be able to tell what’s likely to happen in the near future, because if you’re walking in the woods and a bear goes from chill to growling, you want to run before it’s got its teeth in you! There’s a theory called predictive coding that says the brain is all based on predictions – we learn patterns from the world, build models, make predictions, and update those models and predictions based on what we observe. In a less dangerous context, predicting words in conversation helps fill in the blanks in a noisy environment when you can’t hear every word clearly. In music, there’s no danger at all, and with only twelve possible notes (if we ignore octaves), it’s a highly controlled and very highly patterned environment. Plus, it’s super widespread – I think you’d be hard pressed to find someone who has never heard music. So, we’re all exposed to it, it’s relatively restricted compared to the real world, and it’s safe – that makes it a really great tool for studying prediction and expectations.
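(For the code-curious: here’s a minimal sketch of that learn-patterns-then-predict loop – a toy first-order Markov model over pitch classes, nothing like a full predictive-coding model, and the tiny corpus and numbers below are made up purely for illustration.)

```python
# Toy sketch of the "learn patterns, predict, measure surprise" loop:
# a first-order Markov model over pitch classes.
from collections import defaultdict
import math

def train(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def surprisal(counts, prev, nxt, alpha=1.0, n_notes=12):
    """Surprisal (-log2 probability) of `nxt` following `prev`,
    with add-alpha smoothing so unseen notes aren't impossible."""
    total = sum(counts[prev].values()) + alpha * n_notes
    p = (counts[prev][nxt] + alpha) / total
    return -math.log2(p)

# Made-up corpus: pitch classes (0 = C, 2 = D, 4 = E, 5 = F, 7 = G)
corpus = [[0, 2, 4, 5, 7, 5, 4, 2, 0], [0, 4, 7, 4, 0]]
model = train(corpus)
print(surprisal(model, 0, 2))   # seen transition -> lower surprisal
print(surprisal(model, 0, 1))   # unseen transition -> higher surprisal
```

The point is just the shape of the idea: the more often a continuation has been heard, the less surprising it is, and surprise can be put on a number line.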
On that note, we did a little demo where I played a tonic triad (the first, third, and fifth notes of the major scale) and then another note, which the audience rated as ‘good’ or ‘bad’ with thumbs up or thumbs down – or thumbs in the middle if the note was somewhere in between. This is something everyone can do, whether they have formal musical training or not, and it was clear from the audience’s reactions that we all know what sounds good and bad in a particular musical context. This is called the tonal hierarchy – notes in the tonic triad sound really good, notes from the scale sound decent, but notes not even in the scale sound very wrong (especially the tritone!). This knowledge reflects the fact that we’ve all learned the patterns in music and can make predictions about what’s going to come next – generally we expect good notes, and bad notes are surprising. It turns out we can imitate this learning and these predictions with a computer model – it’s called IDyOM (Information Dynamics Of Music; it’s free) and it has loads of applications in understanding how we perceive music, including melody, time, style, culture, emotion, and complexity, and now it’s being developed for harmony too. So, part of the idea here is that if we can explain lots of music perception with one idea – prediction – maybe the idea that prediction is how the brain works has some merit.
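If you want to see the tonal hierarchy in actual numbers: our pub demo was basically an informal version of the classic probe-tone experiments by Krumhansl & Kessler (1982), where listeners rated how well each of the twelve notes ‘fit’ after a key-establishing context. Here’s a little sketch that prints their published average ratings for C major (the values are from the literature; the bar-chart printing is just my own illustration):

```python
# The C-major tonal hierarchy from Krumhansl & Kessler's (1982)
# probe-tone experiments: average goodness-of-fit ratings (1-7 scale).
# The tonic triad (C, E, G) sits at the top; out-of-scale notes at the bottom.
profile = {
    "C": 6.35, "C#": 2.23, "D": 3.48, "D#": 2.33, "E": 4.38, "F": 4.09,
    "F#": 2.52, "G": 5.19, "G#": 2.39, "A": 3.66, "A#": 2.29, "B": 2.88,
}
for note, rating in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{note:2s} {'#' * round(rating * 2)} {rating}")
```

Run it and you get a little text bar chart of exactly what the audience’s thumbs were telling us: tonic triad at the top, scale notes in the middle, everything else at the bottom.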
I also talked a little bit about how we can see expectations and surprise in the brain using EEG. When we compare the signal evoked by a surprising note to the signal evoked by an unsurprising note, we find a difference between the two brain responses that is proportional to how surprising that note (or harmony) is. For example, a study by Stefan Koelsch and colleagues showed that a Neapolitan sixth (N6) chord in the middle of a chord sequence created a smaller difference than an N6 chord at the end of a chord sequence. That’s because music generally ends on the tonic (Western tonal and popular music, that is), so not having that closure is a huge violation – on the other hand, N6 chords do exist in music, so while they are somewhat surprising, they’re not that big of a violation of our internalized Western musical syntax.
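In case you’re wondering how that ‘difference’ is actually computed: the standard trick is to average many EEG trials per condition and subtract the averages, giving a difference wave. Here’s a toy sketch with synthetic data – the shapes, numbers, and simulated response below are invented for illustration and aren’t from Koelsch’s study or any real recording:

```python
# Sketch of the basic ERP logic: average many EEG epochs per condition,
# then subtract the averages. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 600        # e.g. 600 samples = 600 ms at 1 kHz
t = np.arange(n_samples) / 1000.0     # time axis in seconds

def fake_epochs(amplitude):
    """Noisy trials containing a negativity ~200 ms after note onset."""
    bump = -amplitude * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return bump + rng.normal(0, 2.0, size=(n_trials, n_samples))

expected   = fake_epochs(amplitude=1.0)  # unsurprising notes: small response
surprising = fake_epochs(amplitude=4.0)  # surprising notes: larger response

# Averaging washes out the noise and leaves the event-related potential;
# the difference wave isolates the extra response to the surprise.
difference_wave = surprising.mean(axis=0) - expected.mean(axis=0)
peak = t[np.argmin(difference_wave)]
print(f"Peak difference at ~{peak * 1000:.0f} ms")
```

The single trials look like pure noise; it’s only after averaging a hundred of them that the brain’s response to the surprising notes pops out.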
I think that pretty much sums it up! It definitely went by much quicker than I originally thought – being in front of people made me talk faster and forget to say things here and there – but I think it was well enjoyed, and there were some really interesting questions and discussion afterwards – over a pint, of course!