
Poster B115

Decoupling speech acoustics and linguistic representations

Poster Session B - Sunday, April 14, 2024, 8:00 – 10:00 am EDT, Sheraton Hall ABC
Also presenting in Data Blitz Session 2 - Saturday, April 13, 2024, 1:00 – 2:30 pm EDT, Ballroom Center.

Alessandro Tavano1 (alessandro.tavano.office@gmail.com), Cosimo Iaia1; 1Goethe University Frankfurt

Speech tracking is assumed to rely on the temporal alignment of neural oscillations to low-frequency (<10 Hz), quasi-rhythmic amplitude modulations of the speech sound carrier between 3 and 5 Hz, supporting the hypothesis that humans are endowed with a neural oscillator tuned to narrow-band speech rhythmic fluctuations. Recently, it has been proposed that higher-order linguistic units, such as syntactic phrases, also segregate into narrow-band acoustic rhythms, so that dedicated neural oscillators would capture their temporal distribution. Two rarely discussed but almost invariably implicit assumptions ground this hypothesis: 1) speech and language units are unimodally distributed; 2) the overlap in duration across the hierarchy of speech and language units is negligible. We modelled the temporal distribution of multiple speech and syntactic units extracted from two audiobook chapters and show that: 1) key speech units such as words and sentences, and syntactic units such as noun phrases, are bimodally distributed, so a single neural oscillator would not capture their variance profile; 2) speech units and syntactic categories largely overlap in time, making it effectively impossible to temporally segregate linguistic representations. We conclude that the temporal dimension of speech acoustics vastly underspecifies the time scales of speech and syntactic processing. To test this conclusion, we ran a time-resolved mutual information analysis of EEG data recorded from 23 participants listening to the audiobook chapters, and demonstrate that information extraction using vectors based on hierarchical linguistic annotations outperforms analyses using vectors based on speech acoustics.
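As a purely illustrative companion to the abstract, the sketch below shows one way a time-resolved mutual information comparison between an EEG-like signal and two predictor vectors (an acoustic envelope versus a linguistic annotation) could be set up. All signals here are simulated, and the sampling rate, window length, and estimator (scikit-learn's mutual_info_regression) are assumptions chosen for demonstration, not the authors' actual pipeline.

```python
# Toy sketch (assumed parameters throughout): compare windowed mutual information
# between a simulated EEG channel and (a) an acoustic-envelope predictor versus
# (b) a linguistic-annotation predictor. Not the authors' analysis code.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)

fs = 100                      # sampling rate in Hz (assumed)
n_samples = 60 * fs           # one minute of simulated data

# Simulated "acoustic envelope": slow quasi-rhythmic amplitude fluctuations (~4 Hz).
t = np.arange(n_samples) / fs
envelope = np.abs(np.sin(2 * np.pi * 4 * t)) + 0.1 * rng.standard_normal(n_samples)

# Simulated "linguistic annotation": impulse train marking hypothetical word onsets,
# smoothed into a continuous predictor.
word_onsets = np.zeros(n_samples)
word_onsets[rng.choice(n_samples, size=150, replace=False)] = 1.0
linguistic = np.convolve(word_onsets, np.hanning(30), mode="same")

# Simulated EEG channel: a noisy mixture of both predictors.
eeg = 0.3 * envelope + 0.7 * linguistic + rng.standard_normal(n_samples)

def windowed_mi(signal, predictor, win=2 * fs, step=fs):
    """Estimate mutual information between signal and predictor in sliding windows."""
    mi = []
    for start in range(0, len(signal) - win, step):
        sl = slice(start, start + win)
        mi.append(mutual_info_regression(predictor[sl, None], signal[sl],
                                         random_state=0)[0])
    return np.array(mi)

mi_acoustic = windowed_mi(eeg, envelope)
mi_linguistic = windowed_mi(eeg, linguistic)

print(f"mean MI, acoustic envelope:     {mi_acoustic.mean():.3f}")
print(f"mean MI, linguistic annotation: {mi_linguistic.mean():.3f}")
```

In this toy setup the comparison simply asks which predictor shares more information with the simulated EEG trace over time; the abstract's actual analysis uses hierarchical linguistic annotations of the audiobook chapters rather than a single smoothed onset vector.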

Topic Area: LANGUAGE: Syntax

 
