Hagoort, P. (2017). It is the facts, stupid. In J. Brockman, F. Van der Wa, & H. Corver (Eds.), Wetenschappelijke parels: het belangrijkste wetenschappelijke nieuws volgens 193 'briljante geesten'. Amsterdam: Maven Press.
Hagoort, P. (2017). The neural basis for primary and acquired language skills. In E. Segers, & P. Van den Broek (Eds.), Developmental Perspectives in Written Language and Literacy: In honor of Ludo Verhoeven (pp. 17-28). Amsterdam: Benjamins. doi:10.1075/z.206.02hag.
Abstract: Reading is a cultural invention that needs to recruit cortical infrastructure that was not designed for it (cultural recycling of cortical maps). In the case of reading, both visual cortex and networks for speech processing are recruited. Here I discuss current views on the neurobiological underpinnings of spoken language that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca's and Wernicke's regions are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing, neural infrastructure is shared between production and comprehension. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the theory-of-mind (ToM) network) to establish full functionality of language and communication. The consequences of this architecture for reading are discussed.
Baggio, G., Van Lambalgen, M., & Hagoort, P. (2012). Language, linguistics and cognition. In R. Kempson, T. Fernando, & N. Asher (Eds.), Philosophy of linguistics (pp. 325-356). Amsterdam: North Holland.
Abstract: This chapter provides a partial overview of some currently debated issues in the cognitive science of language. We distinguish two families of problems, which we refer to as 'language and cognition' and 'linguistics and cognition'. Under the first heading we present and discuss the hypothesis that language, in particular the semantics of tense and aspect, is grounded in the planning system. We emphasize the role of non-monotonic inference during language comprehension. We look at the converse issue of the role of linguistic interpretation in reasoning tasks. Under the second heading we investigate the two foremost assumptions of current linguistic methodology, namely intuitions as the only adequate empirical basis of theories of meaning and grammar, and the competence-performance distinction, arguing that these are among the heaviest burdens for a truly comprehensive approach to language. Marr's three-level scheme is proposed as an alternative methodological framework, which we apply in a review of two ERP studies on semantic processing, to the 'binding problem' for language, and in a conclusive set of remarks on relating theories in the cognitive science of language.
Baggio, G., Van Lambalgen, M., & Hagoort, P. (2012). The processing consequences of compositionality. In M. Werning, W. Hinzen, & E. Machery (Eds.), The Oxford handbook of compositionality (pp. 655-672). New York: Oxford University Press.
Bastiaansen, M. C. M., Mazaheri, A., & Jensen, O. (2012). Beyond ERPs: Oscillatory neuronal dynamics. In S. J. Luck, & E. S. Kappenman (Eds.), The Oxford handbook of event-related potential components (pp. 31-50). New York, NY: Oxford University Press.
Bramão, I., Faísca, L., Petersson, K. M., & Reis, A. (2012). The contribution of color to object recognition. In I. Kypraios (Ed.), Advances in object recognition systems (pp. 73-88). Rijeka, Croatia: InTech. Retrieved from http://www.intechopen.com/books/advances-in-object-recognition-systems/the-contribution-of-color-in-object-recognition.
Abstract: The cognitive processes involved in object recognition remain a mystery to the cognitive sciences. We know that the visual system recognizes objects via multiple features, including shape, color, texture, and motion characteristics. However, the way these features are combined to recognize objects is still an open question. The purpose of this contribution is to review the research on the specific role of color information in object recognition. Given that the human brain incorporates specialized mechanisms to handle color perception in the visual environment, it is a fair question to ask what functional role color might play in everyday vision.
Casasanto, D. (2012). Whorfian hypothesis. In J. L. Jackson, Jr. (Ed.), Oxford Bibliographies Online: Anthropology. Oxford: Oxford University Press. doi:10.1093/OBO/9780199766567-0058.
Abstract: Introduction. The Sapir-Whorf hypothesis (a.k.a. the Whorfian hypothesis) concerns the relationship between language and thought. Neither the anthropological linguist Edward Sapir (b. 1884–d. 1939) nor his student Benjamin Whorf (b. 1897–d. 1941) ever formally stated any single hypothesis about the influence of language on nonlinguistic cognition and perception. On the basis of their writings, however, two proposals emerged, generating decades of controversy among anthropologists, linguists, philosophers, and psychologists. According to the more radical proposal, linguistic determinism, the languages that people speak rigidly determine the way they perceive and understand the world. On the more moderate proposal, linguistic relativity, habits of using language influence habits of thinking. As a result, people who speak different languages think differently in predictable ways. During the latter half of the 20th century, the Sapir-Whorf hypothesis was widely regarded as false. Around the turn of the 21st century, however, experimental evidence reopened debate about the extent to which language shapes nonlinguistic cognition and perception. Scientific tests of linguistic determinism and linguistic relativity help to clarify what is universal in the human mind and what depends on the particulars of people's physical and social experience.
General Overviews and Foundational Texts. Writing on the relationship between language and thought predates Sapir and Whorf, and extends beyond the academy. The 19th-century German philosopher Wilhelm von Humboldt argued that language constrains people's worldview, foreshadowing the idea of linguistic determinism later articulated in Sapir 1929 and Whorf 1956 (Humboldt 1988). The intuition that language radically determines thought has been explored in works of fiction such as Orwell's dystopian fantasy 1984 (Orwell 1949). Although there is little empirical support for radical linguistic determinism, more moderate forms of linguistic relativity continue to generate influential research, reviewed from an anthropologist's perspective in Lucy 1997, from a psychologist's perspective in Hunt and Agnoli 1991, and discussed from multidisciplinary perspectives in Gumperz and Levinson 1996 and Gentner and Goldin-Meadow 2003.
Chu, M., & Kita, S. (2012). The role of spontaneous gestures in spatial problem solving. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 57-68). Heidelberg: Springer.
Abstract: When solving spatial problems, people often spontaneously produce hand gestures. Recent research has shown that our knowledge is shaped by the interaction between our body and the environment. In this article, we review and discuss evidence on (1) how spontaneous gesture can reveal the development of problem-solving strategies when people solve spatial problems, and (2) whether producing gestures can enhance spatial problem-solving performance. We argue that when solving novel spatial problems, adults go through deagentivization and internalization processes, which are analogous to young children's cognitive development processes. Furthermore, gesture enhances spatial problem-solving performance. The beneficial effect of gesturing can be extended to non-gesturing trials and can be generalized to a different spatial task that shares similar spatial transformation processes.
Hallé, P., & Cristia, A. (2012). Global and detailed speech representations in early language acquisition. In S. Fuchs, M. Weirich, D. Pape, & P. Perrier (Eds.), Speech planning and dynamics (pp. 11-38). Frankfurt am Main: Peter Lang.
Abstract: We review data and hypotheses dealing with the mental representations for perceived and produced speech that infants build and use over the course of learning a language. In the early stages of speech perception and vocal production, before the emergence of a receptive or a productive lexicon, the dominant picture emerging from the literature suggests rather non-analytic representations based on units of the size of the syllable: young children seem to parse speech into syllable-sized units in spite of their ability to detect sound equivalence based on shared phonetic features. Once a productive lexicon has emerged, word-form representations are initially rather underspecified phonetically but gradually become more specified with lexical growth, up to the phoneme level. The situation is different for the receptive lexicon, in which the phonetic specification of consonants and that of vowels seem to follow different developmental paths. Consonants in stressed syllables are somewhat well specified already at the first signs of a receptive lexicon, and become even better specified with lexical growth. Vowels seem to follow a different developmental path, with increasing flexibility throughout lexical development. Thus, children come to exhibit a consonant-vowel asymmetry in lexical representations, an asymmetry that is clear in adult representations.
Ozyurek, A. (2012). Gesture. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 626-646). Berlin: Mouton.
Abstract: Gestures are meaningful movements of the body, the hands, and the face during communication, which accompany the production of both spoken and signed utterances. Recent research has shown that gestures are an integral part of language and that they contribute semantic, syntactic, and pragmatic information to the linguistic utterance. Furthermore, they reveal internal representations of the language user during communication in ways that might not be encoded in the verbal part of the utterance. Firstly, this chapter summarizes research on the role of gesture in spoken languages. Subsequently, it gives an overview of how gestural components might manifest themselves in sign languages, that is, in a situation in which both gesture and sign are expressed by the same articulators. Current studies are discussed that address the question of whether gestural components are the same or different in the two language modalities from a semiotic as well as from a cognitive and processing viewpoint. Understanding the role of gesture in both sign and spoken language contributes to our knowledge of the human language faculty as a multimodal communication system.
Peeters, D., Vanlangendonck, F., & Willems, R. M. (2012). Bestaat er een talenknobbel? Over taal in ons brein. In M. Boogaard, & M. Jansen (Eds.), Alles wat je altijd al had willen weten over taal: De taalcanon (pp. 41-43). Amsterdam: Meulenhoff.
Abstract: When someone is good at speaking several languages, we say that such a person has a talenknobbel, a "bump for languages." Everyone knows this is not meant literally: we do not recognize someone with a talenknobbel by a large bump on their head. Yet in the past, people genuinely believed that a literal language bump could develop. A well-developed language faculty was thought to go hand in hand with growth of the brain region responsible for it. This part of the brain could supposedly become so large that it pressed against the skull from the inside, especially around the eyes. We now know better. But where in the brain, then, is language actually located?
De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Newman-Norlund, R., Hagoort, P., Levinson, S. C., & Toni, I. (2012). Exploring the cognitive infrastructure of communication. In B. Galantucci, & S. Garrod (Eds.), Experimental Semiotics: Studies on the emergence and evolution of human communication (pp. 51-78). Amsterdam: Benjamins.
Abstract: Human communication is often thought about in terms of transmitted messages in a conventional code like a language. But communication requires a specialized interactive intelligence. Senders have to be able to perform recipient design, while receivers need to be able to do intention recognition, knowing that recipient design has taken place. To study this interactive intelligence in the lab, we developed a new task that taps directly into the underlying abilities to communicate in the absence of a conventional code. We show that subjects are remarkably successful communicators under these conditions, especially when senders get feedback from receivers. Signaling is accomplished by the manner in which an instrumental action is performed, such that instrumentally dysfunctional components of an action are used to convey communicative intentions. The findings have important implications for the nature of the human communicative infrastructure, and the task opens up a line of experimentation on human communication.
Van Berkum, J. J. A. (2012). The electrophysiology of discourse and conversation. In M. J. Spivey, K. McRae, & M. F. Joanisse (Eds.), The Cambridge handbook of psycholinguistics (pp. 589-614). New York: Cambridge University Press.
Abstract: Introduction: What's happening in the brains of two people having a conversation? One reasonable guess is that in the fMRI scanner we'd see most of their brains light up. Another is that their EEG will be a total mess, reflecting dozens of interacting neuronal systems. Conversation recruits all of the basic language systems reviewed in this book. It also heavily taxes cognitive systems more likely to be found in handbooks of memory, attention and control, or social cognition (Brownell & Friedman, 2001). Because most conversations go beyond the single utterance, for instance, they place a heavy load on episodic memory, as well as on the systems that allow us to reallocate cognitive resources to meet the demands of a dynamically changing situation. Furthermore, conversation is a deeply social and collaborative enterprise (Clark, 1996; this volume), in which interlocutors have to keep track of each other's states of mind and coordinate on such things as taking turns, establishing common ground, and the goals of the conversation.