Nijmegen Lectures 2016
by David Poeppel
The lectures discuss recent experimental studies that focus on general questions about the cognitive science and neural implementation of speech and language. On the basis of the empirical findings, I reach (currently) unpopular conclusions, namely that speech is special (not just ‘mere hearing’), that language is structured (not just ‘mere statistics’), and that linguistic theorizing of an appropriately abstract computational form will underpin proper explanation.
Lecture 1: On how speech is pretty special
In this presentation, I consider the notion of specialization for sounds, and especially for speech. Speech contains temporal structure that the brain must analyze to enable linguistic processing. To investigate the neural basis of this analysis, we used sound quilts: stimuli constructed by shuffling segments of a natural sound, approximately preserving its properties at short timescales while disrupting them at longer timescales. We generated quilts from foreign speech to eliminate language cues, and we manipulated the extent of natural acoustic structure by varying the segment length. Using fMRI, we identified bilateral regions of the superior temporal sulcus (STS) whose responses varied with segment length. This effect was absent in primary auditory cortex and did not occur for quilts made from other natural sounds or from acoustically matched synthetic sounds, suggesting tuning to speech-specific spectrotemporal structure. When examined parametrically, the STS response increased with segment length up to ~500 ms. The results identify a locus of speech analysis in human auditory cortex, distinct from lexical, semantic, or syntactic processes.
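To make the stimulus manipulation concrete, the sketch below (in Python with NumPy; not the published algorithm) shows the bare-bones idea behind quilting: cut the signal into fixed-length segments and reorder them at random, so that short-timescale structure within segments survives while longer-timescale structure is scrambled. The published procedure of Overath et al. (2015) additionally chooses segment orderings that limit acoustic discontinuities and cross-fades at the seams; none of that is reproduced here, and the function name, parameters, and segment lengths are purely illustrative.

```python
import numpy as np

def make_quilt(signal, fs, segment_ms, rng=None):
    """Build a crude 'sound quilt': shuffle fixed-length segments of a signal.

    Structure *within* each segment (short timescales) is preserved;
    structure *across* segments (longer timescales) is disrupted.
    """
    rng = np.random.default_rng() if rng is None else rng
    seg_len = int(fs * segment_ms / 1000)        # segment length in samples
    n_segs = len(signal) // seg_len              # drop any trailing remainder
    segments = signal[: n_segs * seg_len].reshape(n_segs, seg_len)
    order = rng.permutation(n_segs)              # random reordering of segments
    return segments[order].ravel()

# Illustration with a stand-in signal (a real experiment would load a
# foreign-speech recording); segment length is the manipulated variable.
fs = 16000                                                    # sampling rate in Hz (assumed)
speech = np.random.default_rng(0).standard_normal(fs * 10)    # 10 s of noise as a placeholder
quilt_fine = make_quilt(speech, fs, segment_ms=30)            # short segments: little natural structure kept
quilt_coarse = make_quilt(speech, fs, segment_ms=480)         # long segments: more natural structure kept
```

Comparing responses to quilts built from short versus long segments then isolates sensitivity to temporal structure beyond the segment scale, which is the parametric contrast described above.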
Discussants
Elia Formisano, Maastricht University
Barbara Tillmann, Lyon Neuroscience Research Center
Recommended reading
Arnal LH., Flinker A., Kleinschmidt A., Giraud AL. & Poeppel D. (2015) Human Screams Occupy a Privileged Niche in the Communication Soundscape. Current Biology 25, 1–6.
http://www.cell.com/current-biology/abstract/S0960-9822%2815%2900737-X
Poeppel, D. (2001). Pure word deafness and the bilateral processing of the speech code. Cognitive Science 25: 679-693. http://onlinelibrary.wiley.com/doi/10.1207/s15516709cog2505_3/abstract
Overath T., McDermott JH., Zarate JM., & Poeppel D. (2015) The cortical analysis of speech-specific temporal structure revealed by responses to sound quilts, Nature Neuroscience, 18(6):903-911. http://www.nature.com/neuro/journal/v18/n6/full/nn.4021.html
Lecture 2: On the sufficiency of abstract structure
The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures on the basis of a grammatical system, resulting in a hierarchy of linguistic units, e.g., words, phrases, and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. In speech, hierarchical linguistic structures do not have boundaries clearly defined by acoustic cues and must therefore be internally and incrementally constructed during comprehension. Previous studies have suggested that cortical activity is synchronized to acoustic features of speech, approximately at the syllabic rate, providing an initial timescale for speech processing. But how the brain uses such syllabic-level phonological representations, closely aligned with the physical input, to build multiple levels of abstract linguistic structure, and how it represents these concurrently, is not known. On the basis of MEG experimentation, I demonstrate that, during listening to connected speech, cortical activity at different timescales concurrently tracks the time course of abstract linguistic structures at different hierarchical levels, e.g., words, phrases, and sentences. Critically, this oscillatory neural tracking of hierarchical linguistic structures is dissociated from the encoding of acoustic cues as well as from the predictability of incoming words. The results suggest that a hierarchy of neural processing timescales underlies the grammar-based internal construction of hierarchical linguistic structure.
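The logic of this kind of frequency-tagging result can be illustrated with a small simulation (a Python/NumPy sketch, not the published MEG analysis). Assuming, for illustration, materials in which syllables arrive at 4 Hz and are grouped by the grammar into 2 Hz phrases and 1 Hz sentences, neural tracking of each level should appear as a spectral peak at the corresponding rate; the simulated signal, rates, and analysis below are assumptions for the sake of the example.

```python
import numpy as np

# Illustrative rates: syllables at 4 Hz, grouped into 2 Hz phrases and 1 Hz sentences.
rates = {"syllable": 4.0, "phrase": 2.0, "sentence": 1.0}

fs = 100                                   # sampling rate of the simulated response (Hz)
t = np.arange(0, 60, 1 / fs)               # 60 s of simulated neural response

# Simulated response: one sinusoidal component per linguistic level, plus noise.
response = sum(np.sin(2 * np.pi * f * t) for f in rates.values())
response = response + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Power spectrum of the response; tracking of each level shows up as a peak
# at the corresponding rate.
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(response)) ** 2

for level, rate in rates.items():
    peak = power[np.argmin(np.abs(freqs - rate))]
    print(f"{level:>8s} rate {rate:.1f} Hz: spectral power {peak:.1f}")
```

In the actual experiments the point is that the acoustics carry only the syllable-level rhythm, so peaks at the phrase and sentence rates index internally constructed structure rather than tracking of the physical input.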
Discussants
Usha Goswami, University of Cambridge
Ole Jensen, Radboud University, Nijmegen
Recommended reading
Luo, H. and Poeppel, D. (2007). Phase Patterns of Neuronal Responses Reliably Discriminate Speech in Human Auditory Cortex. Neuron, 54, 1001-1010. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2703451/
Giraud, AL & Poeppel, D. (2012). Cortical oscillations and speech processing: emerging computational principles and operations. Nature Neuroscience, 15(4), 511-517. http://www.nature.com/neuro/journal/v15/n4/abs/nn.3063.html
Ding N, Melloni L, Tian X, Zhang H, Poeppel D. (2015). Cortical tracking of hierarchical linguistic structures in connected speech. Nature Neuroscience, advance online publication. http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4186.htm
Lecture 3: On the insufficiency of correlational cognitive neuroscience
We consider here two interrelated problems that current research attempts (or should attempt) to solve. The first challenge concerns how to develop a theoretically well-motivated and biologically sophisticated functional anatomy of the language processing system. This “maps problem” is by and large a practical issue. Much as is true for vision, language research needs fine-grained maps of the regions that underpin the domain; which techniques can be harnessed to build such an articulated model, in the absence of an animal model, remains a difficult question. The second, closely related challenge concerns the “parts list” (or the set of primitives, or the ontology) for language actually under consideration. Coarse conceptions (such as the original “production” versus “comprehension”) are completely insufficient and incoherent. Current ideas, such as phonology versus syntax versus semantics, are also unlikely to provide a plausible link to neurobiological infrastructure. This “mapping problem” constitutes a more difficult, principled challenge: what is the appropriate level of analysis and granularity that allows us to map between (or align) the biological hardware and the computational requirements of language processing? The first challenge, the maps problem, addresses how to break down linguistic computation in space. The second challenge, the mapping problem, addresses how to break down language function into computational primitives suitable for neurobiology. If these problems are not explicitly tackled, our answers to questions about ‘brain and language’ may remain correlational rather than mechanistic and explanatory.
Discussants
Peter Hagoort, Max Planck Institute for Psycholinguistics, Nijmegen
Norbert Hornstein, University of Maryland
Recommended reading
Poeppel, D. and Embick, D. (2005). The relation between linguistics and neuroscience. In A. Cutler (ed.), Twenty-First Century Psycholinguistics: Four Cornerstones. Lawrence Erlbaum.
Poeppel, D. (2012). The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. Cogn Neuropsychol, 29(1-2):34-55. http://www.ncbi.nlm.nih.gov/pubmed/23017085
Embick D. & Poeppel D. (2014) Towards a computational(ist) neurobiology of language: correlational, integrated and explanatory neurolinguistics. Language, Cognition and Neuroscience, DOI: 10.1080/23273798.2014.980750.
http://www.tandfonline.com/doi/abs/10.1080/23273798.2014.980750
Organizers
Peter Hagoort
Anne Kosem
Tineke Snijders
Location
Radboud University Nijmegen, Aula, Comeniuslaan 2, Nijmegen
Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen