Poster E127, Monday, March 26, 2:30-4:30 pm, Exhibit Hall C
Statistical Learning and Gestalt-like Principles Predict Human Melodic Expectations
Aniruddh Patel, Emily Morgan, Allison Fogel; Tufts University
Across cognitive domains, humans form expectations about upcoming events. What knowledge do listeners draw upon to form these expectations? Music provides a test case for adjudicating between rule-based and statistical-learning-based accounts. We ask whether expectations about upcoming notes in melodies are driven by rule-like Gestalt principles (e.g., a preference for small intervals) or by statistical knowledge learned from previous experience. We compare Temperley's (2008) Probabilistic Model of Melody Perception, which incorporates three Gestalt-like principles, with Pearce's (2005) IDyOM model, which learns n-grams from a training corpus. We use multinomial logit modeling to compare these models' ability to predict behavioral data in which participants hear melodic fragments and sing the note they expect to come next. Fragments were manipulated either to strongly suggest a particular continuation note or to avoid creating strong expectations. We find that the IDyOM and Temperley models each contribute independently to predicting participant responses, but that IDyOM is the stronger predictor, indicating that melodic expectations are driven largely by learned statistical knowledge but also include a Gestalt-like component. We further note that both models perform better when the human data have high entropy (i.e., responses are split among many notes) than when they have low entropy (in particular, when the majority of participants sing the tonic, i.e., at authentic cadences). We conclude that forming expectations from the musical surface alone is insufficient to capture listeners' expectations, suggesting an important role for hierarchical harmonic knowledge that neither model considered here captures.
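As an illustration of the general analysis structure (a minimal sketch, not the authors' actual code), the following shows one way a multinomial logit comparison of this kind can be set up: each candidate continuation note receives a utility that weights the two models' log-probabilities, and the fitted weights index each model's independent contribution. All function names, data shapes, and the toy data below are assumptions for illustration; the poster's actual analysis (e.g., predictor coding, random effects) may differ.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(beta, idyom_lp, temperley_lp, choices):
        """Negative log-likelihood of a conditional multinomial logit:
        each trial's candidate notes get utility
        beta[0]*IDyOM + beta[1]*Temperley (log-probabilities),
        softmaxed over that trial's candidate set."""
        util = beta[0] * idyom_lp + beta[1] * temperley_lp
        # log-softmax over candidate notes within each trial
        log_p = util - np.logaddexp.reduce(util, axis=1, keepdims=True)
        return -log_p[np.arange(len(choices)), choices].sum()

    # Hypothetical toy data: 200 sung responses, 13 candidate notes each
    rng = np.random.default_rng(0)
    idyom_lp = np.log(rng.dirichlet(np.ones(13), size=200))
    temperley_lp = np.log(rng.dirichlet(np.ones(13), size=200))
    choices = rng.integers(0, 13, size=200)

    fit = minimize(neg_log_lik, x0=np.array([1.0, 1.0]),
                   args=(idyom_lp, temperley_lp, choices))
    print(fit.x)  # fitted weights: each model's independent contribution

Under this setup, a reliably positive weight for each model's predictor would correspond to the reported finding that both models contribute independently, with the larger weight indicating the stronger predictor.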
Topic Area: PERCEPTION & ACTION: Audition