Publications

  • de Lange, I. M., Helbig, K. L., Weckhuysen, S., Moller, R. S., Velinov, M., Dolzhanskaya, N., Marsh, E., Helbig, I., Devinsky, O., Tang, S., Mefford, H. C., Myers, C. T., van Paesschen, W., Striano, P., van Gassen, K., van Kempen, M., De Kovel, C. G. F., Piard, J., Minassian, B. A., Nezarati, M. M., Pessoa, A., Jacquette, A., Maher, B., Balestrini, S., Sisodiya, S., Warde, M. T., De St Martin, A., Chelly, J., van 't Slot, R., Van Maldergem, L., Brilstra, E. H., & Koeleman, B. P. (2016). De novo mutations of KIAA2022 in females cause intellectual disability and intractable epilepsy. Journal of Medical Genetics, 53(12), 850-858. doi:10.1136/jmedgenet-2016-103909.

    Abstract

    Background Mutations in the KIAA2022 gene have been reported in male patients with X-linked intellectual disability, and related female carriers were unaffected. Here, we report 14 female patients who carry a heterozygous de novo KIAA2022 mutation and share a phenotype characterised by intellectual disability and epilepsy.

    Methods Reported females were selected for genetic testing because of substantial developmental problems and/or epilepsy. X-inactivation and expression studies were performed when possible.

    Results All mutations were predicted to result in a frameshift or premature stop. Twelve of 14 patients had intractable epilepsy with myoclonic and/or absence seizures, generalised in 11. Thirteen patients had mild to severe intellectual disability. This female phenotype partially overlaps with the reported male phenotype, which consists of more severe intellectual disability, microcephaly, growth retardation, facial dysmorphisms and, less frequently, epilepsy. One female patient showed completely skewed X-inactivation, complete absence of RNA expression in blood and a phenotype similar to male patients. In the six other tested patients, X-inactivation was random, confirmed by a non-significant twofold to threefold decrease of RNA expression in blood, consistent with the expected mosaicism between cells expressing mutant or normal KIAA2022 alleles.

    Conclusions Heterozygous loss of KIAA2022 expression is a cause of intellectual disability in females. Compared with its hemizygous male counterpart, the heterozygous female disease has less severe intellectual disability, but is more often associated with a severe and intractable myoclonic epilepsy.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2008). Increase in prefrontal cortical volume following cognitive behavioural therapy in patients with chronic fatigue syndrome. Brain, 131, 2172-2180. doi:10.1093/brain/awn140.

    Abstract

    Chronic fatigue syndrome (CFS) is a disabling disorder, characterized by persistent or relapsing fatigue. Recent studies have detected a decrease in cortical grey matter volume in patients with CFS, but it is unclear whether this cerebral atrophy constitutes a cause or a consequence of the disease. Cognitive behavioural therapy (CBT) is an effective behavioural intervention for CFS, which combines a rehabilitative approach of a graded increase in physical activity with a psychological approach that addresses thoughts and beliefs about CFS which may impair recovery. Here, we test the hypothesis that cerebral atrophy may be a reversible state that can ameliorate with successful CBT. We have quantified cerebral structural changes in 22 CFS patients that underwent CBT and 22 healthy control participants. At baseline, CFS patients had significantly lower grey matter volume than healthy control participants. CBT intervention led to a significant improvement in health status, physical activity and cognitive performance. Crucially, CFS patients showed a significant increase in grey matter volume, localized in the lateral prefrontal cortex. This change in cerebral volume was related to improvements in cognitive speed in the CFS patients. Our findings indicate that the cerebral atrophy associated with CFS is partially reversed after effective CBT. This result provides an example of macroscopic cortical plasticity in the adult human brain, demonstrating a surprisingly dynamic relation between behavioural state and cerebral anatomy. Furthermore, our results reveal a possible neurobiological substrate of psychotherapeutic treatment.
  • Lartseva, A. (2016). Reading emotions: How people with Autism Spectrum Disorders process emotional language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lattenkamp, E. Z., Mandák, M., & Scherz, M. D. (2016). The advertisement call of Stumpffia be Köhler, Vences, D'Cruze & Glaw, 2010 (Anura: Microhylidae: Cophylinae). Zootaxa, 4205(5), 483-485. doi:10.11646/zootaxa.4205.5.7.

    Abstract

    We describe the calls of Stumpffia be Köhler, Vences, D’Cruze & Glaw, 2010. This is the first call description made for a species belonging to the large-bodied northern Madagascan radiation of Stumpffia Boettger, 1881. Stumpffia is a genus of small (~9–28 mm) microhylid frogs in the Madagascar-endemic subfamily Cophylinae Cope. Little is known about their reproductive strategies. Most species are assumed to lay their eggs in foam nests in the leaf litter of Madagascar’s humid and semi-humid forests (Glaw & Vences 1994; Klages et al. 2013). They exhibit some degree of parental care, with the males guarding the nest after eggs are laid (Klages et al. 2013). The bioacoustic repertoire of these frogs is thought to be limited, and there are two distinct call structures known for the genus: the advertisement call of the type species, S. psologlossa Boettger, 1881, is apparently unique in being a trill of notes repeated in short succession. All other species from which calls are known emit single, whistling or chirping notes (Vences & Glaw 1991; Vences et al. 2006).

  • Lau, E., Weber, K., Gramfort, A., Hämäläinen, M., & Kuperberg, G. (2016). Spatiotemporal signatures of lexical–semantic prediction. Cerebral Cortex, 26(4), 1377-1387. doi:10.1093/cercor/bhu219.

    Abstract

    Although there is broad agreement that top-down expectations can facilitate lexical-semantic processing, the mechanisms driving these effects are still unclear. In particular, while previous electroencephalography (EEG) research has demonstrated a reduction in the N400 response to words in a supportive context, it is often challenging to dissociate facilitation due to bottom-up spreading activation from facilitation due to top-down expectations. The goal of the current study was to specifically determine the cortical areas associated with facilitation due to top-down prediction, using magnetoencephalography (MEG) recordings supplemented by EEG and functional magnetic resonance imaging (fMRI) in a semantic priming paradigm. In order to modulate expectation processes while holding context constant, we manipulated the proportion of related pairs across 2 blocks (10 and 50% related). Event-related potential results demonstrated a larger N400 reduction when a related word was predicted, and MEG source localization of activity in this time-window (350-450 ms) localized the differential responses to left anterior temporal cortex. fMRI data from the same participants support the MEG localization, showing contextual facilitation in left anterior superior temporal gyrus for the high expectation block only. Together, these results provide strong evidence that facilitatory effects of lexical-semantic prediction on the electrophysiological response 350-450 ms postonset reflect modulation of activity in left anterior temporal cortex.
  • Lausberg, H., & Sloetjes, H. (2016). The revised NEUROGES–ELAN system: An objective and reliable interdisciplinary analysis tool for nonverbal behavior and gesture. Behavior Research Methods, 48, 973-993. doi:10.3758/s13428-015-0622-z.

    Abstract

    As visual media spread to all domains of public and scientific life, nonverbal behavior is taking its place as an important form of communication alongside the written and spoken word. An objective and reliable method of analysis for hand movement behavior and gesture is therefore currently required in various scientific disciplines, including psychology, medicine, linguistics, anthropology, sociology, and computer science. However, no adequate common methodological standards have been developed thus far. Many behavioral gesture-coding systems lack objectivity and reliability, and automated methods that register specific movement parameters often fail to show validity with regard to psychological and social functions. To address these deficits, we have combined two methods, an elaborated behavioral coding system and an annotation tool for video and audio data. The NEUROGES–ELAN system is an effective and user-friendly research tool for the analysis of hand movement behavior, including gesture, self-touch, shifts, and actions. Since its first publication in 2009 in Behavior Research Methods, the tool has been used in interdisciplinary research projects to analyze a total of 467 individuals from different cultures, including subjects with mental disease and brain damage. Partly on the basis of new insights from these studies, the system has been revised methodologically and conceptually. The article presents the revised version of the system, including a detailed study of reliability. The improved reproducibility of the revised version makes NEUROGES–ELAN a suitable system for basic empirical research into the relation between hand movement behavior and gesture and cognitive, emotional, and interactive processes and for the development of automated movement behavior recognition methods.
  • Lawson, D., Jordan, F., & Magid, K. (2008). On sex and suicide bombing: An evaluation of Kanazawa’s ‘evolutionary psychological imagination’. Journal of Evolutionary Psychology, 6(1), 73-84. doi:10.1556/JEP.2008.1002.

    Abstract

    Kanazawa (2007) proposes the ‘evolutionary psychological imagination’ (p.7) as an authoritative framework for understanding complex social and public issues. As a case study of this approach, Kanazawa addresses acts of international terrorism, specifically suicide bombings committed by Muslim men. It is proposed that a comprehensive explanation of such acts can be gained from taking an evolutionary perspective armed with only three points of cultural knowledge: 1. Muslims are exceptionally polygynous, 2. Muslim men believe they will gain reproductive access to 72 virgins if they die as a martyr and 3. Muslim men have limited access to pornography, which might otherwise relieve the tension built up from intra-sexual competition. We agree with Kanazawa that evolutionary models of human behaviour can contribute to our understanding of even the most complex social issues. However, Kanazawa’s case study, of what he refers to as ‘World War III’, rests on a flawed theoretical argument, lacks empirical backing, and holds little in the way of explanatory power.
  • Lemke, J. R., Geider, K., Helbig, K. L., Heyne, H. O., Schutz, H., Hentschel, J., Courage, C., Depienne, C., Nava, C., Heron, D., Moller, R. S., Hjalgrim, H., Lal, D., Neubauer, B. A., Nurnberg, P., Thiele, H., Kurlemann, G., Arnold, G. L., Bhambhani, V., Bartholdi, D., Pedurupillay, C. R., Misceo, D., Frengen, E., Stromme, P., Dlugos, D. J., Doherty, E. S., Bijlsma, E. K., Ruivenkamp, C. A., Hoffer, M. J., Goldstein, A., Rajan, D. S., Narayanan, V., Ramsey, K., Belnap, N., Schrauwen, I., Richholt, R., Koeleman, B. P., Sa, J., Mendonca, C., De Kovel, C. G. F., Weckhuysen, S., Hardies, K., De Jonghe, P., De Meirleir, L., Milh, M., Badens, C., Lebrun, M., Busa, T., Francannet, C., Piton, A., Riesch, E., Biskup, S., Vogt, H., Dorn, T., Helbig, I., Michaud, J. L., Laube, B., & Syrbe, S. (2016). Delineating the GRIN1 phenotypic spectrum: A distinct genetic NMDA receptor encephalopathy. Neurology, 86(23), 2171-2178. doi:10.1212/wnl.0000000000002740.
  • Lenkiewicz, P., Pereira, M., Freire, M., & Fernandes, J. (2008). Accelerating 3D medical image segmentation with high performance computing. In Proceedings of the IEEE International Workshops on Image Processing Theory, Tools and Applications - IPT (pp. 1-8).

    Abstract

    Digital processing of medical images has helped physicians and patients in recent years by allowing examination and diagnosis at a very precise level. Today, possibly its greatest contribution to modern healthcare is the use of high-performance computing architectures to process the huge amounts of data that can be collected by modern acquisition devices. This paper presents a parallel implementation of an image segmentation algorithm that operates on a computer cluster equipped with 10 processing units. Thanks to a well-organized distribution of the workload, we significantly shorten the execution time of the developed algorithm and reach a performance gain very close to linear.
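
    The abstract does not spell out how the workload is distributed, so the sketch below only illustrates the general idea of slab-wise parallelism over a 3D volume. It is a hypothetical Python illustration, not the paper's implementation: a toy thresholding step stands in for the actual segmentation algorithm, and no boundary information is exchanged between slabs.

```python
# Hypothetical sketch of slab-wise parallel processing of a 3D volume.
# Not the paper's implementation: a toy threshold stands in for the
# segmentation algorithm, and slabs are treated as fully independent.
from multiprocessing import Pool

import numpy as np


def segment_slab(slab):
    """Toy 'segmentation': mark voxels above a fixed intensity threshold."""
    return (slab > 0.5).astype(np.uint8)


def parallel_segment(volume, n_workers=10):
    """Split the volume along its first axis and segment the slabs in parallel."""
    slabs = np.array_split(volume, n_workers, axis=0)
    with Pool(n_workers) as pool:
        results = pool.map(segment_slab, slabs)
    return np.concatenate(results, axis=0)


if __name__ == "__main__":
    volume = np.random.rand(128, 256, 256)  # stand-in for a 3D scan
    mask = parallel_segment(volume, n_workers=10)
    print(mask.shape, mask.dtype)
```

    When the slabs can be processed independently and the merge step is cheap, the speedup approaches the number of workers, which is consistent with the near-linear gain the abstract reports.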
  • Leonard, M., Baud, M., Sjerps, M. J., & Chang, E. (2016). Perceptual restoration of masked speech in human cortex. Nature Communications, 7: 13619. doi:10.1038/ncomms13619.

    Abstract

    Humans are adept at understanding speech despite the fact that our natural listening environment is often filled with interference. An example of this capacity is phoneme restoration, in which part of a word is completely replaced by noise, yet listeners report hearing the whole word. The neurological basis for this unconscious fill-in phenomenon is unknown, despite being a fundamental characteristic of human hearing. Here, using direct cortical recordings in humans, we demonstrate that missing speech is restored at the acoustic-phonetic level in bilateral auditory cortex, in real-time. This restoration is preceded by specific neural activity patterns in a separate language area, left frontal cortex, which predicts the word that participants later report hearing. These results demonstrate that during speech perception, missing acoustic content is synthesized online from the integration of incoming sensory cues and the internal neural dynamics that bias word-level expectation and prediction.

    Additional information

    ncomms13619-s1.pdf
  • Lev-Ari, S., & Peperkamp, S. (2016). How the demographic make-up of our community influences speech perception. The Journal of the Acoustical Society of America, 139(6), 3076-3087. doi:10.1121/1.4950811.

    Abstract

    Speech perception is known to be influenced by listeners’ expectations of the speaker. This paper tests whether the demographic makeup of individuals’ communities can influence their perception of foreign sounds by influencing their expectations of the language. Using online experiments with participants from all across the U.S. and matched census data on the proportion of Spanish and other foreign language speakers in participants’ communities, this paper shows that the demographic makeup of individuals’ communities influences their expectations of foreign languages to have an alveolar trill versus a tap (Experiment 1), as well as their consequent perception of these sounds (Experiment 2). Thus, the paper shows that while individuals’ expectations of foreign language to have a trill occasionally lead them to misperceive a tap in a foreign language as a trill, a higher proportion of non-trill language speakers in one’s community decreases this likelihood. These results show that individuals’ environment can influence their perception by shaping their linguistic expectations.
  • Lev-Ari, S. (2016). How the size of our social network influences our semantic skills. Cognitive Science, 40, 2050-2064. doi:10.1111/cogs.12317.

    Abstract

    People differ in the size of their social network, and thus in the properties of the linguistic input they receive. This article examines whether differences in social network size influence individuals’ linguistic skills in their native language, focusing on global comprehension of evaluative language. Study 1 exploits the natural variation in social network size and shows that individuals with larger social networks are better at understanding the valence of restaurant reviews. Study 2 manipulated social network size by randomly assigning participants to learn novel evaluative words as used by two (small network) versus eight (large network) speakers. It replicated the finding from Study 1, showing that those exposed to a larger social network were better at comprehending the valence of product reviews containing the novel words that were written by novel speakers. Together, these studies show that the size of one's social network can influence success at language comprehension. They thus open the door to research on how individuals’ lifestyle and the nature of their social interactions can influence linguistic skills.
  • Lev-Ari, S. (2016). Studying individual differences in the social environment to better understand language learning and processing. Linguistics Vanguard, 2(s1), 13-22. doi:10.1515/lingvan-2016-0015.
  • Lev-Ari, S. (2016). Selective grammatical convergence: Learning from desirable speakers. Discourse Processes, 53(8), 657-674. doi:10.1080/0163853X.2015.1094716.

    Abstract

    Models of language learning often assume that we learn from all the input we receive. This assumption is particularly strong in the domain of short-term and long-term grammatical convergence, where researchers argue that grammatical convergence is mostly an automatic process insulated from social factors. This paper shows that the degree to which individuals learn from grammatical input is modulated by social and contextual factors, such as the degree to which the speaker is liked and their social standing. Furthermore, such modulation is found in experiments that test generalized learning rather than convergence during the interaction. This paper thus shows the importance of the social context in grammatical learning, and indicates that the social context should be integrated into models of language learning.
  • Levelt, W. J. M. (2016). Localism versus holism. Historical origins of studying language in the brain. In R. Rubens, & M. Van Dijk (Eds.), Sartoniana vol. 29 (pp. 37-60). Ghent: Ghent University.
  • Levelt, W. J. M. (2016). The first golden age of psycholinguistics 1865-World War I. In R. Rubens, & M. Van Dyck (Eds.), Sartoniana vol. 29 (pp. 15-36). Ghent: Ghent University.
  • Levelt, W. J. M., & De Swaan, A. (2016). Levensbericht Nico Frijda. In Koninklijke Nederlandse Akademie van Wetenschappen (Ed.), Levensberichten en herdenkingen 2016 (pp. 16-25). Amsterdam: KNAW.
  • Levelt, W. J. M., & Ruijssenaars, A. (1995). Levensbericht Johan Joseph Dumont. In Jaarboek Koninklijke Nederlandse Akademie van Wetenschappen (pp. 31-36).
  • Levelt, W. J. M. (1995). Chapters of psychology: An interview with Wilhelm Wundt. In R. L. Solso, & D. W. Massaro (Eds.), The science of mind: 2001 and beyond (pp. 184-202). Oxford University Press.
  • Levelt, W. J. M. (1970). A scaling approach to the study of syntactic relations. In G. B. Flores d'Arcais, & W. J. M. Levelt (Eds.), Advances in psycholinguistics (pp. 109-121). Amsterdam: North Holland.
  • Levelt, W. J. M., Zwanenburg, W., & Ouweneel, G. R. E. (1970). Ambiguous surface structure and phonetic form in French. Foundations of Language, 6(2), 260-273.

    Abstract

    In modern approaches to phonology a lack of clarity exists on the issue of whether phonetic facts are psychological or physical realities. The results from an experiment suggest that phonetic facts can be considered as psychological realities, but with the restriction that they can (but not necessarily always do) take acoustical shape. More specifically, the syntactic material consisted of ambiguous French sentences of the following sort: On a tourné ce film intéressant pour les étudiants. They were spoken (a) in disambiguating contexts, without the (four) readers noticing the ambiguities, and (b) without context, but with the instruction to make a conscious effort to disambiguate. By tape splicing, the contexts were removed from the context-embedded sentences. Twenty-eight native speakers of French listened to the sentences and judged whether one or the other meaning had been intended by the speaker. Subjects performed significantly above chance: 60% correct identifications for context-embedded sentences, 75% for context-free sentences. Pitch-amplitude analyses were made to determine the acoustical differences involved.
  • Levelt, W. J. M. (2008). An introduction to the theory of formal languages and automata. Amsterdam: John Benjamins.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec, (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (2008). Formal grammars in linguistics and psycholinguistics [Re-ed.]. Amsterdam: Benjamins.

    Abstract

    Contains: Vol. 1: An introduction to the theory of formal languages and automata; Vol. 2: Applications in linguistic theory; Vol. 3: Psycholinguistic applications.

    Additional information

    Table of contents
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (1970). Hierarchical chunking in sentence processing. Perception & Psychophysics, 8(2), 99-103.
  • Levelt, W. J. M. (1970). Hierarchical clustering algorithms in the psychology of grammar. In G. B. Flores d'Arcais, & W. J. M. Levelt (Eds.), Advances in psycholinguistics (pp. 101-108). Amsterdam: North Holland.
  • Levelt, W. J. M. (1995). Hoezo 'neuro'? Hoezo 'linguïstisch'? Intermediair, 31(46), 32-37.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. MacNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1980). On-line processing constraints on the properties of signed and spoken language. In U. Bellugi, & M. Studdert-Kennedy (Eds.), Signed and spoken language: Biological constraints on linguistic form (pp. 141-160). Weinheim: Verlag Chemie.

    Abstract

    It is argued that the dominantly successive nature of language is largely mode-independent and holds equally for sign and for spoken language. A preliminary distinction is made between what is simultaneous or successive in the signal, and what is in the process; these need not coincide, and it is the successiveness of the process that is at stake. It is then discussed extensively for the word/sign level, and in a more preliminary fashion for the clause and discourse level that online processes are parallel in that they can simultaneously draw on various sources of knowledge (syntactic, semantic, pragmatic), but successive in that they can work at the interpretation of only one unit at a time. This seems to hold for both sign and spoken language. In the final section, conjectures are made about possible evolutionary explanations for these properties of language processing.
  • Levelt, W. J. M. (1995). Psycholinguistics. In C. C. French, & A. M. Colman (Eds.), Cognitive psychology (reprint, pp. 39-57). London: Longman.
  • Levelt, W. J. M. (1995). The ability to speak: From intentions to spoken words. European Review, 3(1), 13-23. doi:10.1017/S1062798700001290.

    Abstract

    In recent decades, psychologists have become increasingly interested in our ability to speak. This paper sketches the present theoretical perspective on this most complex skill of homo sapiens. The generation of fluent speech is based on the interaction of various processing components. These mechanisms are highly specialized, dedicated to performing specific subroutines, such as retrieving appropriate words, generating morpho-syntactic structure, computing the phonological target shape of syllables, words, phrases and whole utterances, and creating and executing articulatory programmes. As in any complex skill, there is a self-monitoring mechanism that checks the output. These component processes are targets of increasingly sophisticated experimental research, of which this paper presents a few salient examples.
  • Levelt, W. J. M. (2008). Speaking [Korean edition]. Seoul: Korean Research Foundation.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (2008). What has become of formal grammars in linguistics and psycholinguistics? [Postscript]. In Formal Grammars in linguistics and psycholinguistics (pp. 1-17). Amsterdam: John Benjamins.
  • Levelt, W. J. M. (1980). Toegepaste aspecten van het taal-psychologisch onderzoek: Enkele inleidende overwegingen. In J. Matter (Ed.), Toegepaste aspekten van de taalpsychologie (pp. 3-11). Amsterdam: VU Boekhandel.
  • Levinson, S. C. (1995). 'Logical' Connectives in Natural Language: A First Questionnaire. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 61-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513476.

    Abstract

    It has been hypothesised that human reasoning has a non-linguistic foundation, but is nevertheless influenced by the formal means available in a language. For example, Western logic is transparently related to European sentential connectives (e.g., and, if … then, or, not), some of which cannot be unambiguously expressed in other languages. The questionnaire explores reasoning tools and practices through investigating translation equivalents of English sentential connectives and collecting examples of “reasoned arguments”.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2016). “Process and perish” or multiple buffers with push-down stacks? [Commentary on Christiansen & Chater]. Behavioral and Brain Sciences, 39: e81. doi:10.1017/S0140525X15000862.

    Abstract

    This commentary raises two issues: (1) Language processing is hastened not only by internal pressures but also externally by turn-taking in language use; (2) the theory requires nested levels of processing, but linguistic levels do not fully nest; further, it would seem to require multiple memory buffers, otherwise there’s no obvious treatment for discontinuous structures, or for verbatim recall.
  • Levinson, S. C. (2008). Landscape, seascape and the ontology of places on Rossel Island, Papua New Guinea. Language Sciences, 30(2/3), 256-290. doi:10.1016/j.langsci.2006.12.032.

    Abstract

    This paper describes the descriptive landscape and seascape terminology of an isolate language, Yélî Dnye, spoken on a remote island off Papua New Guinea. The terminology reveals an ontology of landscape terms fundamentally mismatching that in European languages, and in current GIS applications. These landscape terms, and a rich set of seascape terms, provide the ontological basis for toponyms across subdomains. Considering what motivates landscape categorization, three factors are considered: perceptual salience, human affordance and use, and cultural ideas. The data show that cultural ideas and practices are the major categorizing force: they directly impact the ecology with environmental artifacts, construct religious ideas which play a major role in the use of the environment and its naming, and provide abstract cultural templates which organize large portions of vocabulary across subdomains.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2016). Language and mind: Let's get the issues straight! In S. D. Blum (Ed.), Making sense of language: Readings in culture and communication [3rd ed.] (pp. 68-80). Oxford: Oxford University Press.
  • Levinson, S. C. (1995). Interactional biases in human thinking. In E. N. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2016). The countable singulare tantum. In A. Reuneker, R. Boogaart, & S. Lensink (Eds.), Aries netwerk: Een constructicon (pp. 145-146). Leiden: Leiden University.
  • Levinson, S. C. (2008). Space in language and cognition. Singapore: Word Publishing Company/CUP.

    Abstract

    Chinese translation of the 2003 publication.
  • Levinson, S. C. (1980). Speech act theory: The state of the art. Language teaching and linguistics: Abstracts, 5-24.

    Abstract

    Survey article
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C., & Majid, A. (2008). Preface and priorities. In A. Majid (Ed.), Field manual volume 11 (pp. iii-iv). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (1995). Three levels of meaning. In F. Palmer (Ed.), Grammar and meaning: Essays in honour of Sir John Lyons (pp. 90-115). Cambridge University Press.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2008). Time and space questionnaire. In A. Majid (Ed.), Field Manual Volume 11 (pp. 42-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492955.

    Abstract

    This entry contains: 1. An invitation to think about to what extent the grammar of space and time share lexical and morphosyntactic resources - the suggestions here are only prompts, since it would take a long questionnaire to fully explore this; 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal - but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levinson, S. C. (2016). Turn-taking in human communication, origins, and implications for language processing. Trends in Cognitive Sciences, 20(1), 6-14. doi:10.1016/j.tics.2015.10.010.

    Abstract

    Most language usage is interactive, involving rapid turn-taking. The turn-taking system has a number of striking properties: turns are short and responses are remarkably rapid, but turns are of varying length and often of very complex construction such that the underlying cognitive processing is highly compressed. Although neglected in cognitive science, the system has deep implications for language processing and acquisition that are only now becoming clear. Appearing earlier in ontogeny than linguistic competence, it is also found across all the major primate clades. This suggests a possible phylogenetic continuity, which may provide key insights into language evolution.
  • Levshina, N. (2016). When variables align: A Bayesian multinomial mixed-effects model of English permissive constructions. Cognitive Linguistics, 27(2), 235-268. doi:10.1515/cog-2015-0054.
  • Lewis, A. G., Schoffelen, J.-M., Schriefers, H., & Bastiaansen, M. C. M. (2016). A Predictive Coding Perspective on Beta Oscillations during Sentence-Level Language Comprehension. Frontiers in Human Neuroscience, 10: 85. doi:10.3389/fnhum.2016.00085.

    Abstract

    Oscillatory neural dynamics have been steadily receiving more attention as a robust and temporally precise signature of network activity related to language processing. We have recently proposed that oscillatory dynamics in the beta and gamma frequency ranges measured during sentence-level comprehension might be best explained from a predictive coding perspective. Under our proposal we related beta oscillations to both the maintenance/change of the neural network configuration responsible for the construction and representation of sentence-level meaning, and to top–down predictions about upcoming linguistic input based on that sentence-level meaning. Here we zoom in on these particular aspects of our proposal, and discuss both old and new supporting evidence. Finally, we present some preliminary magnetoencephalography data from an experiment comparing Dutch subject- and object-relative clauses that was specifically designed to test our predictive coding framework. Initial results support the first of the two suggested roles for beta oscillations in sentence-level language comprehension.
  • Lewis, A. G., Lemhöfer, K., Schoffelen, J.-M., & Schriefers, H. (2016). Gender agreement violations modulate beta oscillatory dynamics during sentence comprehension: A comparison of second language learners and native speakers. Neuropsychologia, 89(1), 254-272. doi:10.1016/j.neuropsychologia.2016.06.031.

    Abstract

    For native speakers, many studies suggest a link between oscillatory neural activity in the beta frequency range and syntactic processing. For late second language (L2) learners on the other hand, the extent to which the neural architecture supporting syntactic processing is similar to or different from that of native speakers is still unclear. In a series of four experiments, we used electroencephalography to investigate the link between beta oscillatory activity and the processing of grammatical gender agreement in Dutch determiner-noun pairs, for Dutch native speakers, and for German L2 learners of Dutch. In Experiment 1 we show that for native speakers, grammatical gender agreement violations are yet another among many syntactic factors that modulate beta oscillatory activity during sentence comprehension. Beta power is higher for grammatically acceptable target words than for those that mismatch in grammatical gender with their preceding determiner. In Experiment 2 we observed no such beta modulations for L2 learners, irrespective of whether trials were sorted according to objective or subjective syntactic correctness. Experiment 3 ruled out that the absence of a beta effect for the L2 learners in Experiment 2 was due to repetition of the target nouns in objectively correct and incorrect determiner-noun pairs. Finally, Experiment 4 showed that when L2 learners are required to explicitly focus on grammatical information, they show modulations of beta oscillatory activity, comparable to those of native speakers, but only when trials are sorted according to participants’ idiosyncratic lexical representations of the grammatical gender of target nouns. Together, these findings suggest that beta power in L2 learners is sensitive to violations of grammatical gender agreement, but only when the importance of grammatical information is highlighted, and only when participants' subjective lexical representations are taken into account.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2008). Twelve-month-olds communicate helpfully and appropriately for knowledgeable and ignorant partners. Cognition, 108(3), 732-739. doi:10.1016/j.cognition.2008.06.013.

    Abstract

    In the current study we investigated whether 12-month-old infants gesture appropriately for knowledgeable versus ignorant partners, in order to provide them with needed information. In two experiments we found that in response to a searching adult, 12-month-olds pointed more often to an object whose location the adult did not know and thus needed information to find (she had not seen it fall down just previously) than to an object whose location she knew and thus did not need information to find (she had watched it fall down just previously). These results demonstrate that, in contrast to classic views of infant communication, infants’ early pointing at 12 months is already premised on an understanding of others’ knowledge and ignorance, along with a prosocial motive to help others by providing needed information.
  • Liszkowski, U. (2008). Before L1: A differentiated perspective on infant gestures. Gesture, 8(2), 180-196. doi:10.1075/gest.8.2.04lis.

    Abstract

    This paper investigates the social-cognitive and motivational complexities underlying prelinguistic infants' gestural communication. With regard to deictic referential gestures, new and recent experimental evidence shows that infant pointing is a complex communicative act based on social-cognitive skills and cooperative motives. With regard to infant representational gestures, findings suggest the need to re-interpret these gestures as initially non-symbolic gestural social acts. Based on the available empirical evidence, the paper argues that deictic referential communication emerges as a foundation of human communication first in gestures, already before language. Representational symbolic communication, instead, emerges as a transformation of deictic communication first in the vocal modality and, perhaps, in gestures through non-symbolic, socially situated routines.
  • Liszkowski, U., Albrecht, K., Carpenter, M., & Tomasello, M. (2008). Infants’ visual and auditory communication when a partner is or is not visually attending. Infant Behavior and Development, 31(2), 157-167. doi:10.1016/j.infbeh.2007.10.011.
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Emergence of signal structure: Effects of duration constraints. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    Recent work has investigated the emergence of structure in speech using experiments which use artificial continuous signals. Some experiments have had no limit on the duration which signals can have (e.g. Verhoef et al., 2014), and others have had time limitations (e.g. Verhoef et al., 2015). However, the effect of time constraints on the structure in signals has never been experimentally investigated.
  • Little, H., & de Boer, B. (2016). Did the pressure for discrimination trigger the emergence of combinatorial structure? In Proceedings of the 2nd Conference of the International Association for Cognitive Semiotics (pp. 109-110).
  • Little, H., Eryılmaz, K., & De Boer, B. (2016). Differing signal-meaning dimensionalities facilitates the emergence of structure. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/25.html.

    Abstract

    Structure of language is not only caused by cognitive processes, but also by physical aspects of the signalling modality. We test the assumptions surrounding the role which the physical aspects of the signal space will have on the emergence of structure in speech. Here, we use a signal creation task to test whether a signal space and a meaning space having similar dimensionalities will generate an iconic system with signal-meaning mapping and whether, when the topologies differ, the emergence of non-iconic structure is facilitated. In our experiments, signals are created using infrared sensors which use hand position to create audio signals. We find that people take advantage of signal-meaning mappings where possible. Further, we use trajectory probabilities and measures of variance to show that when there is a dimensionality mismatch, more structural strategies are used.
  • Little, H. (2016). Nahran Bhannamz: Language Evolution in an Online Zombie Apocalypse Game. In Createvolang: creativity and innovation in language evolution.
  • Lockwood, G. (2016). Academic clickbait: Articles with positively-framed titles, interesting phrasing, and no wordplay get more attention online. The Winnower, 3: e146723.36330. doi:10.15200/winn.146723.36330.

    Abstract

    This article is about whether the factors which drive online sharing of non-scholarly content also apply to academic journal titles. It uses Altmetric scores as a measure of online attention to articles from Frontiers in Psychology published in 2013 and 2014. Article titles with result-oriented positive framing and more interesting phrasing receive higher Altmetric scores, i.e., get more online attention. Article titles with wordplay and longer article titles receive lower Altmetric scores. This suggests that the same factors that affect how widely non-scholarly content is shared extend to academia, which has implications for how academics can make their work more likely to have more impact.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). How iconicity helps people learn new words: neural correlates and individual differences in sound-symbolic bootstrapping. Collabra, 2(1): 7. doi:10.1525/collabra.42.

    Abstract

    Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word learning harder, especially for people who are more sensitive to sound symbolism.

    Additional information

    https://osf.io/ema3t/
  • Lockwood, G., Dingemanse, M., & Hagoort, P. (2016). Sound-symbolism boosts novel word learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(8), 1274-1281. doi:10.1037/xlm0000235.

    Abstract

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which is representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory information, to investigate how sensitive Dutch speakers are to sound-symbolism in Japanese in a learning task. Participants were taught 2 sets of Japanese ideophones; 1 set with the ideophones’ real meanings in Dutch, the other set with their opposite meanings. In Experiment 1, participants learned the ideophones and their real meanings much better than the ideophones with their opposite meanings. Moreover, despite the learning rounds, participants were still able to guess the real meanings of the ideophones in a 2-alternative forced-choice test after they were informed of the manipulation. This shows that natural language sound-symbolism is robust beyond 2-alternative forced-choice paradigms and affects broader language processes such as word learning. In Experiment 2, participants learned regular Japanese adjectives with the same manipulation, and there was no difference between real and opposite conditions. This shows that natural language sound-symbolism is especially strong in ideophones, and that people learn words better when form and meaning match. The highlights of this study are as follows: (a) Dutch speakers learn real meanings of Japanese ideophones better than opposite meanings, (b) Dutch speakers accurately guess meanings of Japanese ideophones, (c) this sensitivity happens despite learning some opposite pairings, (d) no such learning effect exists for regular Japanese adjectives, and (e) this shows the importance of sound-symbolism in scaffolding language learning.
  • Lockwood, G., Hagoort, P., & Dingemanse, M. (2016). Synthesized Size-Sound Sound Symbolism. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1823-1828). Austin, TX: Cognitive Science Society.

    Abstract

    Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
  • Lucas, C., Griffiths, T., Xu, F., & Fawcett, C. (2008). A rational model of preference learning and choice prediction by children. In D. Koller, Y. Bengio, D. Schuurmans, L. Bottou, & A. Culotta (Eds.), Advances in Neural Information Processing Systems.

    Abstract

    Young children demonstrate the ability to make inferences about the preferences of other agents based on their choices. However, there exists no overarching account of what children are doing when they learn about preferences or how they use that knowledge. We use a rational model of preference learning, drawing on ideas from economics and computer science, to explain the behavior of children in several recent experiments. Specifically, we show how a simple econometric model can be extended to capture two- to four-year-olds’ use of statistical information in inferring preferences, and their generalization of these preferences.
  • Macuch Silva, V., & Roberts, S. G. (2016). Language adapts to signal disruption in interaction. In S. G. Roberts, C. Cuskley, L. McCrohon, L. Barceló-Coblijn, O. Feher, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Retrieved from http://evolang.org/neworleans/papers/20.html.

    Abstract

    Linguistic traits are often seen as reflecting cognitive biases and constraints (e.g. Christiansen & Chater, 2008). However, language must also adapt to properties of the channel through which communication between individuals occurs. Perhaps the most basic aspect of any communication channel is noise. Communicative signals can be blocked, degraded or distorted by other sources in the environment. This poses a fundamental problem for communication. On average, channel disruption accompanies problems in conversation every 3 minutes (27% of cases of other-initiated repair, Dingemanse et al., 2015). Linguistic signals must adapt to this harsh environment. While modern language structures are robust to noise (e.g. Piantadosi et al., 2011), we investigate how noise might have shaped the early emergence of structure in language.

    The obvious adaptation to noise is redundancy. Signals which are maximally different from competitors are harder to render ambiguous by noise. Redundancy can be increased by adding differentiating segments to each signal (increasing the diversity of segments). However, this makes each signal more complex and harder to learn. Under this strategy, holistic languages may emerge. Another strategy is reduplication - repeating parts of the signal so that noise is less likely to disrupt all of the crucial information. This strategy does not increase the difficulty of learning the language - there is only one extra rule which applies to all signals. Therefore, under pressures for learnability, expressivity and redundancy, reduplicated signals are expected to emerge. However, reduplication is not a pervasive feature of words (though it does occur in limited domains like plurals or iconic meanings). We suggest that this is due to the pressure for redundancy being lifted by conversational infrastructure for repair. Receivers can request that senders repeat signals only after a problem occurs. That is, robustness is achieved by repeating the signal across conversational turns (when needed) instead of within single utterances.

    As a proof of concept, we ran two iterated learning chains with pairs of individuals in generations learning and using an artificial language (e.g. Kirby et al., 2015). The meaning space was a structured collection of unfamiliar images (3 shapes x 2 textures x 2 outline types). The initial language for each chain was the same written, unstructured, fully expressive language. Signals produced in each generation formed the training language for the next generation. Within each generation, pairs played an interactive communication game. The director was given a target meaning to describe, and typed a word for the matcher, who guessed the target meaning from a set. With a 50% probability, a contiguous section of 3-5 characters in the typed word was replaced by ‘noise’ characters (#). In one chain, the matcher could initiate repair by requesting that the director type and send another signal. Parallel generations across chains were matched for the number of signals sent (if repair was initiated for a meaning, then it was presented twice in the parallel generation where repair was not possible) and for noise (a signal for a given meaning which was affected by noise in one generation was affected by the same amount of noise in the parallel generation).

    For the final set of signals produced in each generation we measured signal redundancy (the zip compressibility of the signals), character diversity (the entropy of the characters of the signals) and systematic structure (the z-score of the correlation between signal edit distance and meaning Hamming distance). In the condition without repair, redundancy increased with each generation (r=0.97, p=0.01) and character diversity decreased (r=-0.99, p=0.001), which is consistent with reduplication. Linear regressions revealed that generations with repair had higher overall systematic structure (main effect of condition, t = 2.5, p < 0.05), increasing character diversity (interaction between condition and generation, t = 3.9, p = 0.01), and redundancy that increased at a slower rate (interaction between condition and generation, t = -2.5, p < 0.05). That is, the ability to repair counteracts the pressure from noise and facilitates the emergence of compositional structure. Therefore, just as systems that repair damage during DNA replication are vital for the evolution of biological species (O’Brien, 2006), conversational repair may regulate the replication of linguistic forms in the cultural evolution of language. Future studies should further investigate how evolving linguistic structure is shaped by interaction pressures, drawing on experimental methods and naturalistic studies of emerging languages, both spoken (e.g. Botha, 2006; Roberge, 2008) and signed (e.g. Senghas, Kita, & Ozyurek, 2004; Sandler et al., 2005).
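
    The three outcome measures named above (zip compressibility, character entropy, and the z-scored correlation between signal edit distances and meaning Hamming distances) can be made concrete with a small sketch. The Python code below is an assumed reconstruction for illustration, not the authors' analysis code: the function names, the plain zlib compression ratio, and the permutation-based z-score are choices made here.

```python
# Illustrative reconstruction (assumptions, not the authors' code) of the three
# measures named in the abstract: redundancy as zip compressibility, character
# diversity as Shannon entropy, and systematic structure as a z-scored
# correlation between signal edit distances and meaning Hamming distances.
import itertools
import math
import random
import zlib


def redundancy(signals):
    """Zip compressibility: higher value = more compressible, i.e. more redundant."""
    raw = "".join(signals).encode()
    return 1 - len(zlib.compress(raw)) / len(raw)


def character_diversity(signals):
    """Shannon entropy (bits) over all characters used in the signals."""
    chars = "".join(signals)
    return -sum(
        (chars.count(c) / len(chars)) * math.log2(chars.count(c) / len(chars))
        for c in set(chars)
    )


def edit_distance(a, b):
    """Levenshtein distance between two signals."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def structure_zscore(signals, meanings, n_perm=1000):
    """z-score of the signal-distance/meaning-distance correlation against a
    baseline of randomly shuffled signal-meaning mappings."""
    pairs = list(itertools.combinations(range(len(signals)), 2))
    sig_d = [edit_distance(signals[i], signals[j]) for i, j in pairs]

    def corr(ms):
        # Hamming distance between meanings = number of differing feature values.
        mean_d = [sum(f1 != f2 for f1, f2 in zip(ms[i], ms[j])) for i, j in pairs]
        mx, my = sum(sig_d) / len(sig_d), sum(mean_d) / len(mean_d)
        cov = sum((x - mx) * (y - my) for x, y in zip(sig_d, mean_d))
        sx = math.sqrt(sum((x - mx) ** 2 for x in sig_d))
        sy = math.sqrt(sum((y - my) ** 2 for y in mean_d))
        return cov / (sx * sy) if sx and sy else 0.0

    observed = corr(meanings)
    perms = []
    for _ in range(n_perm):
        shuffled = list(meanings)
        random.shuffle(shuffled)
        perms.append(corr(shuffled))
    mu = sum(perms) / n_perm
    sd = math.sqrt(sum((p - mu) ** 2 for p in perms) / n_perm)
    return (observed - mu) / sd


# Toy usage: a 2x2 meaning space with fully compositional signals.
# (For such tiny signal sets, zlib header overhead dominates the redundancy value.)
meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
signals = ["tupa", "tuki", "mopa", "moki"]
print(redundancy(signals), character_diversity(signals),
      structure_zscore(signals, meanings, n_perm=200))
```

    In the toy example the structure score is high because signal edit distances track meaning distances exactly; with real experimental languages these measures would be computed over the full signal set of each generation.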
  • Magyari, L., & De Ruiter, J. P. (2008). Timing in conversation: The anticipation of turn endings. In J. Ginzburg, P. Healey, & Y. Sato (Eds.), Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (pp. 139-146). London: King's College.

    Abstract

    We examined how communicators can switch between speaker and listener roles with such accurate timing. During conversations, the majority of role transitions happen with a gap or overlap of only a few hundred milliseconds. This suggests that listeners can predict when the turn of the current speaker is going to end. Our hypothesis is that listeners know when a turn ends because they know how it ends. Anticipating the last words of a turn can help the next speaker predict when the turn will end, and also anticipate the content of the turn, so that an appropriate response can be prepared in advance. We used the stimulus materials of an earlier experiment (De Ruiter, Mitterer & Enfield, 2006), in which subjects listened to turns from natural conversations and had to press a button exactly when the turn they were listening to ended. In the present experiment, we investigated whether subjects could complete those turns when only an initial fragment of the turn was presented to them. We found that subjects made better predictions about the last words of those turns that had received more accurate responses in the earlier button-press experiment.
  • Magyari, L. (2008). A mentális lexikon modelljei és a magyar nyelv (Models of mental lexicon and the Hungarian language). In J. Gervain, & C. Pléh (Eds.), A láthatatlan nyelv (Invisible Language). Budapest: Gondolat Kiadó.
  • Majid, A., van Leeuwen, T., & Dingemanse, M. (2008). Synaesthesia: A cross-cultural pilot. In A. Majid (Ed.), Field manual volume 11 (pp. 37-41). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492960.

    Abstract

    This Field Manual entry has been superseded by the 2009 version:
    https://doi.org/10.17617/2.883570

  • Majid, A., Boster, J. S., & Bowerman, M. (2008). The cross-linguistic categorization of everyday events: A study of cutting and breaking. Cognition, 109(2), 235-250. doi:10.1016/j.cognition.2008.08.009.

    Abstract

    The cross-linguistic investigation of semantic categories has a long history, spanning many disciplines and covering many domains. But the extent to which semantic categories are universal or language-specific remains highly controversial. Focusing on the domain of events involving material destruction (“cutting and breaking” events, for short), this study investigates how speakers of different languages implicitly categorize such events through the verbs they use to talk about them. Speakers of 28 typologically, genetically and geographically diverse languages were asked to describe the events shown in a set of videoclips, and the distribution of their verbs across the events was analyzed with multivariate statistics. The results show that there is considerable agreement across languages in the dimensions along which cutting and breaking events are distinguished, although there is variation in the number of categories and the placement of their boundaries. This suggests that there are strong constraints in human event categorization, and that variation is played out within a restricted semantic space.
  • Majid, A. (2008). Conceptual maps using multivariate statistics: Building bridges between typological linguistics and psychology [Commentary on Inferring universals from grammatical variation: Multidimensional scaling for typological analysis by William Croft and Keith T. Poole]. Theoretical Linguistics, 34(1), 59-66. doi:10.1515/THLI.2008.005.
  • Majid, A., & Huettig, F. (2008). A crosslinguistic perspective on semantic cognition [commentary on Precis of Semantic cognition: A parallel distributed approach by Timothy T. Rogers and James L. McClelland]. Behavioral and Brain Sciences, 31(6), 720-721. doi:10.1017/S0140525X08005967.

    Abstract

    Coherent covariation appears to be a powerful explanatory factor accounting for a range of phenomena in semantic cognition. But its role in accounting for the crosslinguistic facts is less clear. Variation in naming, within the same semantic domain, raises vexing questions about the necessary parameters needed to account for the basic facts underlying categorization.
  • Majid, A., & Levinson, S. C. (2008). Language does provide support for basic tastes [Commentary on A study of the science of taste: On the origins and influence of the core ideas by Robert P. Erickson]. Behavioral and Brain Sciences, 31, 86-87. doi:10.1017/S0140525X08003476.

    Abstract

    Recurrent lexicalization patterns across widely different cultural contexts can provide a window onto common conceptualizations. The cross-linguistic data support the idea that sweet, salt, sour, and bitter are basic tastes. In addition, umami and fatty are likely basic tastes, as well.
  • Majid, A. (Ed.). (2008). Field manual volume 11. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Majid, A. (2008). Focal colours. In A. Majid (Ed.), Field Manual Volume 11 (pp. 8-10). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492958.

    Abstract

    In this task we aim to find the best exemplars or “focal colours” of each basic colour term in our field languages. This is an important part of the evidence we need in order to understand the colour data collected using 'The Language of Vision I: Colour'. This task consists of an experiment in which participants pick out the best exemplar for the colour terms in their language. The goal is to establish language-specific focal colours.
  • Majid, A. (2016). The content of minds: Asifa Majid talks to Jon Sutton about language and thought. The Psychologist, 29, 554-556.
  • Majid, A. (2016). Was wir von anderen Kulturen über den Geruchsinn lernen können. In Museum Tinguely (Ed.), Belle Haleine – Der Duft der Kunst. Interdisziplinäres Symposium (pp. 73-79). Heidelberg: Kehrer.
  • Majid, A. (2016). What other cultures can tell us about the sense of smell. In Museum Tinguely (Ed.), Belle haleine - the scent of art: interdisciplinary symposium (pp. 72-77). Heidelberg: Kehrer.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2008). Discourse structure and relative clause processing. Memory & Cognition, 36(1), 170-181. doi:10.3758/MC.36.1.170.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
  • Malt, B. C., Gennari, S., Imai, M., Ameel, E., Tsuda, N., & Majid, A. (2008). Talking about walking: Biomechanics and the language of locomotion. Psychological Science, 19(3), 232-240. doi:10.1111/j.1467-9280.2008.02074.x.

    Abstract

    What drives humans around the world to converge in certain ways in their naming while diverging dramatically in others? We studied how naming patterns are constrained by investigating whether labeling of human locomotion reflects the biomechanical discontinuity between walking and running gaits. Similarity judgments of a student locomoting on a treadmill at different slopes and speeds revealed perception of this discontinuity. Naming judgments of the same clips by speakers of English, Japanese, Spanish, and Dutch showed lexical distinctions between walking and running consistent with the perceived discontinuity. Typicality judgments showed that major gait terms of the four languages share goodness-of-example gradients. These data demonstrate that naming reflects the biomechanical discontinuity between walking and running and that shared elements of naming can arise from correlations among stimulus properties that are dynamic and fleeting. The results support the proposal that converging naming patterns reflect structure in the world, not only acts of construction by observers.
  • Mani, N., Daum, M., & Huettig, F. (2016). “Pro-active” in many ways: Developmental evidence for a dynamic pluralistic approach to prediction. Quarterly Journal of Experimental Psychology, 69(11), 2189-2201. doi:10.1080/17470218.2015.1111395.

    Abstract

    The anticipation of the forthcoming behaviour of social interaction partners is a useful ability supporting interaction and communication between social partners. Associations and prediction based on the production system (in line with views that listeners use the production system covertly to anticipate what the other person might be likely to say) are two potential factors, which have been proposed to be involved in anticipatory language processing. We examined the influence of both factors on the degree to which listeners predict upcoming linguistic input. Are listeners more likely to predict book as an appropriate continuation of the sentence “The boy reads a”, based on the strength of the association between the words read and book (strong association) and read and letter (weak association)? Do more proficient producers predict more? What is the interplay of these two influences on prediction? The results suggest that associations influence language-mediated anticipatory eye gaze in two-year-olds and adults only when two thematically appropriate target objects compete for overt attention but not when these objects are presented separately. Furthermore, children’s prediction abilities are strongly related to their language production skills when appropriate target objects are presented separately but not when presented together. Both influences on prediction in language processing thus appear to be context-dependent. We conclude that multiple factors simultaneously influence listeners’ anticipation of upcoming linguistic input and that only such a dynamic approach to prediction can capture listeners’ prowess at predictive language processing.
  • Manrique, E. (2016). Other-initiated repair in Argentine Sign Language. Open Linguistics, 2, 1-34. doi:10.1515/opli-2016-0001.

    Abstract

    Other-initiated repair is an essential interactional practice to secure mutual understanding in everyday interaction. This article presents evidence from a large conversational corpus of a sign language, showing that signers of Argentine Sign Language (Lengua de Señas Argentina or ‘LSA’), like users of spoken languages, use a systematic set of linguistic formats and practices to indicate troubles of signing, seeing and understanding. The aim of this article is to provide a general overview of the different visual-gestural linguistic patterns of other-initiated repair sequences in LSA. It also describes the quantitative distribution of other-initiated repair formats based on a collection of 213 cases. It describes the multimodal components of open and restricted types of repair initiators, and reports a previously undescribed implicit practice to initiate repair in LSA in comparison to explicitly produced formats. Part of a special issue presenting repair systems across a range of languages, this article contributes to a better understanding of the phenomenon of other-initiated repair in terms of visual and gestural practices in human interaction in both signed and spoken languages.
  • Marslen-Wilson, W., & Tyler, L. K. (Eds.). (1980). Max-Planck-Institute for Psycholinguistics: Annual Report Nr. 1 1980. Nijmegen: MPI for Psycholinguistics.
  • Martin, A. E., & McElree, B. (2008). A content-addressable pointer mechanism underlies comprehension of verb-phrase ellipsis. Journal of Memory and Language, 58(3), 879-906. doi:10.1016/j.jml.2007.06.010.

    Abstract

    Interpreting a verb-phrase ellipsis (VP ellipsis) requires accessing an antecedent in memory, and then integrating a representation of this antecedent into the local context. We investigated the online interpretation of VP ellipsis in an eye-tracking experiment and four speed–accuracy tradeoff experiments. To investigate whether the antecedent for a VP ellipsis is accessed with a search or direct-access retrieval process, Experiments 1 and 2 measured the effect of the distance between an ellipsis and its antecedent on the speed and accuracy of comprehension. Accuracy was lower with longer distances, indicating that interpolated material reduced the quality of retrieved information about the antecedent. However, contra a search process, distance did not affect the speed of interpreting ellipsis. This pattern suggests that antecedent representations are content-addressable and retrieved with a direct-access process. To determine whether interpreting ellipsis involves copying antecedent information into the ellipsis site, Experiments 3–5 manipulated the length and complexity of the antecedent. Some types of antecedent complexity lowered accuracy, notably, the number of discourse entities in the antecedent. However, neither antecedent length nor complexity affected the speed of interpreting the ellipsis. This pattern is inconsistent with a copy operation, and it suggests that ellipsis interpretation may involve a pointer to extant structures in memory.
  • Martin, A. E. (2016). Language processing as cue integration: Grounding the psychology of language in perception and neurophysiology. Frontiers in Psychology, 7: 120. doi:10.3389/fpsyg.2016.00120.

    Abstract

    I argue that cue integration, a psychophysiological mechanism from vision and multisensory perception, offers a computational linking hypothesis between psycholinguistic theory and neurobiological models of language. I propose that this mechanism, which incorporates probabilistic estimates of a cue's reliability, might function in language processing from the perception of a phoneme to the comprehension of a phrase structure. I briefly consider the implications of the cue integration hypothesis for an integrated theory of language that includes acquisition, production, dialogue and bilingualism, while grounding the hypothesis in canonical neural computation.
  • Matić, D., Hammond, J., & Van Putten, S. (2016). Left-dislocation, sentences and clauses in Avatime, Tundra Yukaghir and Whitesands. In J. Fleischhauer, A. Latrouite, & R. Osswald (Eds.), Exploring the Syntax-Semantics Interface. Festschrift for Robert D. Van Valin, Jr. (pp. 339-367). Düsseldorf: Düsseldorf University Press.
  • Matić, D. (2016). Tag questions and focus markers: Evidence from the Tompo dialect of Even. In M. M. J. Fernandez-Vest, & R. D. Van Valin Jr. (Eds.), Information structure and spoken language in a cross-linguistic perspective (pp. 167-190). Berlin: Mouton de Gruyter.
  • McCafferty, S. G., & Gullberg, M. (Eds.). (2008). Gesture and SLA: Toward an integrated approach [Special Issue]. Studies in Second Language Acquisition, 30(2).
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., Eisner, F., & Norris, D. (2016). When brain regions talk to each other during speech processing, what are they talking about? Commentary on Gow and Olson (2015). Language, Cognition and Neuroscience, 31(7), 860-863. doi:10.1080/23273798.2016.1154975.

    Abstract

    This commentary on Gow and Olson [2015. Sentential influences on acoustic-phonetic processing: A Granger causality analysis of multimodal imaging data. Language, Cognition and Neuroscience. doi:10.1080/23273798.2015.1029498] questions in three ways their conclusion that speech perception is based on interactive processing. First, it is not clear that the data presented by Gow and Olson reflect normal speech recognition. Second, Gow and Olson's conclusion depends on still-debated assumptions about the functions performed by specific brain regions. Third, the results are compatible with feedforward models of speech perception and appear inconsistent with models in which there are online interactions about phonological content. We suggest that progress in the neuroscience of speech perception requires the generation of testable hypotheses about the function(s) performed by inter-regional connections.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer word. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
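    The embedding statistic described in this abstract can be illustrated with a rough sketch. The Python below is not the authors' analysis (which used phonological transcriptions of the English vocabulary); it works on an orthographic word list, and the vowel-run syllable heuristic, the function names and the toy lexicon are assumptions for illustration: for each polysyllabic word, it checks whether any shorter word of the lexicon occurs inside it and whether such an embedding starts at the word's onset.

def vowel_groups(word):
    # Crude orthographic syllable proxy: count runs of vowel letters.
    vowels = set("aeiouy")
    groups, in_group = 0, False
    for ch in word.lower():
        if ch in vowels and not in_group:
            groups += 1
        in_group = ch in vowels
    return groups

def embedding_stats(lexicon):
    # Returns (share of polysyllabic words containing a shorter embedded word,
    # share of those words whose embeddings include one at the word onset).
    words = set(lexicon)
    poly = [w for w in lexicon if vowel_groups(w) >= 2]
    with_embedding = at_onset = 0
    for w in poly:
        starts = [i for i in range(len(w)) for j in range(i + 1, len(w) + 1)
                  if w[i:j] != w and w[i:j] in words]
        if starts:
            with_embedding += 1
            if 0 in starts:
                at_onset += 1
    return with_embedding / len(poly), at_onset / max(with_embedding, 1)

# Toy example: both polysyllables contain embedded words, both at onset.
print(embedding_stats(["cat", "log", "catalog", "ham", "hamster", "star"]))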
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • Meyer, A. S., Huettig, F., & Levelt, W. J. M. (2016). Same, different, or closely related: What is the relationship between language production and comprehension? Journal of Memory and Language, 89, 1-7. doi:10.1016/j.jml.2016.03.002.
  • Meyer, A. S., & Huettig, F. (Eds.). (2016). Speaking and Listening: Relationships Between Language Production and Comprehension [Special Issue]. Journal of Memory and Language, 89.
  • Meyer, A. S., Ouellet, M., & Häcker, C. (2008). Parallel processing of objects in a naming task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 982-987. doi:10.1037/0278-7393.34.4.982.

    Abstract

    The authors investigated whether speakers who named several objects processed them sequentially or in parallel. Speakers named object triplets, arranged in a triangle, in the order left, right, and bottom object. The left object was easy or difficult to identify and name. During the saccade from the left to the right object, the right object shown at trial onset (the interloper) was replaced by a new object (the target), which the speakers named. Interloper and target were identical or unrelated objects, or they were conceptually unrelated objects with the same name (e.g., bat [animal] and [baseball] bat). The mean duration of the gazes to the target was shorter when interloper and target were identical or had the same name than when they were unrelated. The facilitatory effects of identical and homophonous interlopers were significantly larger when the left object was easy to process than when it was difficult to process. This interaction demonstrates that the speakers processed the left and right objects in parallel.
  • Meyer, A. S., Sleiderink, A. M., & Levelt, W. J. M. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25-B33. doi:10.1016/S0010-0277(98)00009-2.

    Abstract

    Eye movements have been shown to reflect word recognition and language comprehension processes occurring during reading and auditory language comprehension. The present study examines whether the eye movements speakers make during object naming similarly reflect speech planning processes. In Experiment 1, speakers named object pairs saying, for instance, 'scooter and hat'. The objects were presented as ordinary line drawings or with partly deleted contours and had high- or low-frequency names. Contour type and frequency both significantly affected the mean naming latencies and the mean time spent looking at the objects. The frequency effects disappeared in Experiment 2, in which the participants categorized the objects instead of naming them. This suggests that the frequency effects of Experiment 1 arose during lexical retrieval. We conclude that eye movements during object naming indeed reflect linguistic planning processes and that the speakers' decision to move their eyes from one object to the next is contingent upon the retrieval of the phonological form of the object names.
  • Michalareas, G., Vezoli, J., Van Pelt, S., Schoffelen, J.-M., Kennedy, H., & Fries, P. (2016). Alpha-Beta and Gamma Rhythms Subserve Feedback and Feedforward Influences among Human Visual Cortical Areas. Neuron, 89(2), 384-397. doi:10.1016/j.neuron.2015.12.018.

    Abstract

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and we correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral- and dorsal-stream visual areas are differentially affected by inter-areal influences in the alpha-beta band.
