Music and language both involve systematic patterns of timing, accent and grouping (rhythm) and structured patterns of pitch (melody). Yet musical rhythm and melody seem very different from the rhythm and melody of ordinary speech (i.e., speech prosody). Indeed, some widespread aspects of music, such as the use of a periodic beat and discrete pitch intervals, have no counterpart in ordinary speech. Nevertheless, empirical research suggests that certain aspects of the structure and processing of rhythm and melody are shared by music and language. For example, the rhythms of a culture’s instrumental music can reflect the prosody of its language, and the rhythm of one’s native language can influence how one hears basic rhythmic patterns in nonlinguistic contexts. Furthermore, musical training appears to improve the brain’s sensory encoding of pitch and timing patterns in speech. This lecture explores connections between linguistic and musical rhythm and melody and their implications for language learning and remediation.
Usha Goswami, Centre for Neuroscience in Education, University of Cambridge, Cambridge, UK
Henkjan Honing, University of Amsterdam, Amsterdam, The Netherlands
Lawrence Parsons, The University of Sheffield, Sheffield, UK
Patel, A.D., & Daniele, J.R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87: B35-B45.
Patel, A.D., & Iversen, J.R. (2007). The linguistic benefits of musical abilities. Trends in Cognitive Sciences, 11: 369-372.
Iversen, J.R., Patel, A.D., & Ohgushi, K. (2008). Perception of rhythmic grouping depends on auditory experience. Journal of the Acoustical Society of America, 124: 2263-2271.
Liu, F., Patel, A.D., Fourcin, A., & Stewart, L. (2010). Intonation processing in congenital amusia: Discrimination, identification, and imitation. Brain, 133: 1682-1693.
In addition, chapter 3 is relevant in: Patel, A.D. (2008). Music, Language, and the Brain. NY: Oxford University Press.
Music and language both employ sequences with rich hierarchical structure, built from perceptually discrete elements combined in principled ways. That is, both are syntactic systems. However, instrumental music, unlike language, does not convey semantic propositions, and it has been argued that the hierarchical structures organizing tones vs. words are quite different. Furthermore, there are clear cases of dissociation between the processing of musical and linguistic structure following brain damage. How, then, are we to make sense of a growing body of neuroimaging evidence that points to overlap in musical and linguistic syntactic processing? This talk describes a theoretical framework for reconciling these observations and for guiding future comparative work on musical and linguistic syntax. This “resource-sharing framework” posits a fundamental distinction between domain-specific representations and non-domain-specific processing mechanisms. Although the framework was developed in the context of research on syntactic processing, this lecture will examine how it might be applied to the study of relationships between linguistic and musical meaning, focusing on the processing of discourse coherence in language and music.
Barbara Tillmann, CNRS - Université Claude Bernard Lyon, Lyon, France
Eric Clarke, University of Oxford, Oxford, UK
Patel, A.D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6: 674-681.
Patel, A.D., Iversen, J.R., Wassenaar, M., & Hagoort, P. (2008). Musical syntactic processing in agrammatic Broca’s aphasia. Aphasiology, 22: 776-789.
Fedorenko, E., Patel, A.D., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37: 1-9.
In addition, chapters 5 and 6 are relevant in: Patel, A.D. (2008). Music, Language, and the Brain. NY: Oxford University Press.
This lecture offers a novel theoretical perspective on the evolution of music. At present, a number of adaptationist theories posit that the human capacity for music is a product of natural selection, reflecting the survival value of musical behaviors in our species’ past. In sharp contrast, a prominent nonadaptationist theory of music, that of Steven Pinker, argues that music is a human invention and is biologically useless. I argue that research on music and the brain supports neither of these views. Contrary to adaptationist theories, neuroscientific research suggests that the existence of music can be explained without invoking any evolution-based brain specialization for musical abilities. And contrary to Pinker’s claim, neuroscience research suggests that music can be biologically powerful: musical behaviors (e.g., playing, listening) can have lasting effects on nonmusical brain functions, such as language and attention, within individual lifetimes. Music is thus theorized to be a biologically powerful human invention, or “transformative technology of the mind”.
Eckart Altenmüller, Institute of Music Physiology and Musicians' Medicine, Hanover University of Music, Drama and Media, Hanover, Germany
Michael Dunn, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
Katie Overy, The University of Edinburgh, Edinburgh, UK
Patel, A.D. (2010). Music, biological evolution, and the brain. In: M.Bailar, (Ed.). Emerging Disciplines. Houston, TX: Rice University Press (pp. 91-144).
Patel, A.D., Iversen, J.R., Bregman, M.R., & Schulz, I. (2009). Experimental evidence for synchronization to a musical beat in a nonhuman animal. Current Biology, 19: 827-830.
Patel, A.D. (2008). Music, Language, and the Brain. NY: Oxford University Press (chapter 7 is especially relevant).
Radboud University Nijmegen, Aula, Comeniuslaan 2, Nijmegen
Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen