Publications

  • Levelt, W. J. M. (2001). The architecture of normal spoken language use. In G. Gupta (Ed.), Cognitive science: Issues and perspectives (pp. 457-473). New Delhi: Icon Publications.
  • Levelt, W. J. M. (1996). Preface. In W. J. M. Levelt (Ed.), Advanced psycholinguistics: A Bressanone retrospective for Giovanni B. Flores d'Arcais (pp. VII-IX). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levelt, W. J. M. (1996). Hoedt u voor neurolinguïstisch programmeren. Actieblad tegen de kwakzalverij, 107, 12-14.
  • Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (2002). A theory of lexical access in speech production. In G. T. Altmann (Ed.), Psycholinguistics: critical concepts in psychology (pp. 278-377). London: Routledge.
  • Levelt, W. J. M. (2001). De vlieger die (onverwacht) wel opgaat. Natuur & Techniek, 69(6), 60.
  • Levelt, W. J. M. (2001). Defining dyslexia. Science, 292, 1300-1301.
  • Levelt, W. J. M. (1981). Déjà vu? Cognition, 10, 187-192. doi:10.1016/0010-0277(81)90044-5.
  • Levelt, W. J. M. (1996). Foreword. In T. Dijkstra, & K. De Smedt (Eds.), Computational psycholinguistics (pp. ix-xi). London: Taylor & Francis.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M. (1986). Herdenking van Joseph Maria Franciscus Jaspars (16 maart 1934 - 31 juli 1985). In Jaarboek 1986 Koninklijke Nederlandse Akademie van Wetenschappen (pp. 187-189). Amsterdam: North Holland.
  • Levelt, W. J. M., & Maassen, B. (1981). Lexical search and order of mention in sentence production. In W. Klein, & W. J. M. Levelt (Eds.), Crossing the boundaries in linguistics (pp. 221-252). Dordrecht: Reidel.
  • Levelt, W. J. M. (1996). Linguistic intuitions and beyond. In W. J. M. Levelt (Ed.), Advanced psycholinguistics: A Bressanone retrospective for Giovanni B. Flores d'Arcais (pp. 31-35). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levelt, W. J. M. (1996). Perspective taking and ellipsis in spatial descriptions. In P. Bloom, M. A. Peterson, L. Nadel, & M. F. Garrett (Eds.), Language and space (pp. 77-107). Cambridge, MA: MIT Press.
  • Levelt, W. J. M. (2001). Relations between speech production and speech perception: Some behavioral and neurological observations. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honour of Jacques Mehler (pp. 241-256). Cambridge, MA: MIT Press.
  • Levelt, W. J. M. (2001). Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98, 13464-13471. doi:10.1073/pnas.231459498.

    Abstract

    A core operation in speech production is the preparation of words from a semantic base. The theory of lexical access reviewed in this article covers a sequence of processing stages beginning with the speaker’s focusing on a target concept and ending with the initiation of articulation. The initial stages of preparation are concerned with lexical selection, which is zooming in on the appropriate lexical item in the mental lexicon. The following stages concern form encoding, i.e., retrieving a word’s morphemic phonological codes, syllabifying the word, and accessing the corresponding articulatory gestures. The theory is based on chronometric measurements of spoken word production, obtained, for instance, in picture-naming tasks. The theory is largely computationally implemented. It provides a handle on the analysis of multiword utterance production as well as a guide to the analysis and design of neuroimaging studies of spoken utterance production.
  • Levelt, W. J. M. (1981). The speaker's linearization problem [and Discussion]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 295, 305-315. doi:10.1098/rstb.1981.0142.

    Abstract

    The process of speaking is traditionally regarded as a mapping of thoughts (intentions, feelings, etc.) onto language. One requirement that this mapping has to meet is that the units of information to be expressed be strictly ordered. The channel of speech largely prohibits the simultaneous expression of multiple propositions: the speaker has a linearization problem - that is, a linear order has to be determined over any knowledge structure to be formulated. This may be relatively simple if the informational structure has itself an intrinsic linear arrangement, as often occurs with event structures, but it requires special procedures if the structure is more complex, as is often the case in two- or three-dimensional spatial patterns. How, for instance, does a speaker proceed in describing his home, or the layout of his town? Two powerful constraints on linearization derive, on the one hand, from 'mutual knowledge' and, on the other, from working memory limitations. Mutual knowledge may play a role in that the listener can be expected to derive different implicatures from different orderings (compare 'she married and became pregnant' with 'she became pregnant and married'). Mutual knowledge determinants of linearization are essentially pragmatic and cultural, and dependent on the content of discourse. Working memory limitations affect linearization in that a speaker's linearization strategy will minimize memory load during the process of formulating. A multidimensional structure is broken up in such a way that the number of 'return addresses' to be kept in memory will be minimized. This is attained by maximizing the connectivity of the discourse, and by backtracking to stored addresses in a first-in-last-out fashion. These memory determinants of linearization are presumably biological, and independent of the domain of discourse. 
    An important question is whether the linearization requirement is enforced by the oral modality of speech or whether it is a deeper modality-independent property of language use.
  • Levelt, W. J. M. (1996). Waar komen gesproken woorden vandaan? De Psycholoog, 31, 434-437.
  • Levelt, W. J. M. (2001). Woorden ophalen. Natuur en Techniek, 69(10), 74.
  • Levelt, W. J. M. (1986). Zur sprachlichen Abbildung des Raumes: Deiktische und intrinsische Perspektive. In H. Bosshardt (Ed.), Perspektiven auf Sprache. Interdisziplinäre Beiträge zum Gedenken an Hans Hörmann (pp. 187-211). Berlin: De Gruyter.
  • Levinson, S. C., Kita, S., Haun, D. B. M., & Rasch, B. H. (2002). Returning the tables: Language affects spatial reasoning. Cognition, 84(2), 155-188. doi:10.1016/S0010-0277(02)00045-8.

    Abstract

    Li and Gleitman (Turning the tables: language and spatial reasoning. Cognition, in press) seek to undermine a large-scale cross-cultural comparison of spatial language and cognition which claims to have demonstrated that language and conceptual coding in the spatial domain covary (see, for example, Space in language and cognition: explorations in linguistic diversity. Cambridge: Cambridge University Press, in press; Language 74 (1998) 557): the most plausible interpretation is that different languages induce distinct conceptual codings. Arguing against this, Li and Gleitman attempt to show that in an American student population they can obtain any of the relevant conceptual codings just by varying spatial cues, holding language constant. They then argue that our findings are better interpreted in terms of ecologically-induced distinct cognitive styles reflected in language. Linguistic coding, they argue, has no causal effects on non-linguistic thinking – it simply reflects antecedently existing conceptual distinctions. We here show that Li and Gleitman did not make a crucial distinction between frames of spatial reference relevant to our line of research. We report a series of experiments designed to show that they have, as a consequence, misinterpreted the results of their own experiments, which are in fact in line with our hypothesis. Their attempts to reinterpret the large cross-cultural study, and to enlist support from animal and infant studies, fail for the same reasons. We further try to discern exactly what theory drives their presumption that language can have no cognitive efficacy, and conclude that their position is undermined by a wide range of considerations.
  • Levinson, S. C. (2006). Parts of the body in Yélî Dnye, the Papuan language of Rossel Island. Language Sciences, 28(2-3), 221-240. doi:10.1016/j.langsci.2005.11.007.

    Abstract

    This paper describes the terminology used to describe parts of the body in Yélî Dnye, the Papuan language of Rossel Island (Papua New Guinea). The terms are nouns, which display complex patterns of suppletion in possessive and locative uses. Many of the terms are compounds, many of them unanalysable. Semantically, visible body parts divide into four main types: (i) a partonomic subsystem dividing the body into nine major parts: head, neck, two upper limbs, trunk, two upper legs, two lower legs; (ii) designated surfaces (e.g. ‘lower belly’); (iii) collections of surface features (‘face’); (iv) taxonomic subsystems (e.g. ‘big toe’ being a kind of ‘toe’). With regard to (i), the lack of any designation for ‘foot’ or ‘hand’ is notable, as is the absence of a term for ‘leg’ as a whole (although this is a lexical not a conceptual gap, as shown by the alternate taboo vocabulary). Yélî Dnye body part terms do not have major extensions to other domains (e.g. spatial relators). Indeed, a number of the terms are clearly borrowed from outside human biology (e.g. ‘wing butt’ for shoulder).
  • Levinson, S. C., & Wilkins, D. P. (2006). Patterns in the data: Towards a semantic typology of spatial description. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 512-552). Cambridge: Cambridge University Press.
  • Levinson, S. C., & Wilkins, D. P. (2006). The background to the study of the language of space. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 1-23). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2006). The language of space in Yélî Dnye. In S. C. Levinson, & D. P. Wilkins (Eds.), Grammars of space: Explorations in cognitive diversity (pp. 157-203). Cambridge: Cambridge University Press.
  • Levinson, S. C. (2002). Time for a linguistic anthropology of time. Current Anthropology, 43(4), S122-S123. doi:10.1086/342214.
  • Levinson, S. C. (2001). Motion Verb Stimulus (Moverb) version 2. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 9-13). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3513706.

    Abstract

    How do languages express ideas of movement, and how do they package different components of this domain, such as manner and path of motion? This task uses one large set of stimuli to gain knowledge of certain key aspects of motion verb meanings in the target language, and expands the investigation beyond simple verbs (e.g., go) to include the semantics of motion predications complete with adjuncts (e.g., go across something). Consultants are asked to view and briefly describe 96 animations of a few seconds each. The task is designed to get linguistic elicitations of motion predications under contrastive comparison with other animations in the same set. Unlike earlier tasks, the stimuli focus on inanimate moving items or “figures” (in this case, a ball).
  • Levinson, S. C. (2001). Covariation between spatial language and cognition. In M. Bowerman, & S. C. Levinson (Eds.), Language acquisition and conceptual development (pp. 566-588). Cambridge: Cambridge University Press.
  • Levinson, S. C., Kita, S., & Ozyurek, A. (2001). Demonstratives in context: Comparative handicrafts. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 52-54). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874663.

    Abstract

    Demonstratives (e.g., words such as this and that in English) pivot on relationships between the item being talked about, and features of the speech act situation (e.g., where the speaker and addressee are standing or looking). However, they are only rarely investigated multi-modally, in natural language contexts. This task is designed to build a video corpus of cross-linguistically comparable discourse data for the study of “deixis in action”, while simultaneously supporting the investigation of joint attention as a factor in speaker selection of demonstratives. In the task, two or more speakers are asked to discuss and evaluate a group of similar items (e.g., examples of local handicrafts, tools, produce) that are placed within a relatively defined space (e.g., on a table). The task can additionally provide material for comparison of pointing gesture practices.
  • Levinson, S. C. (2002). Appendix to the 2002 Supplement, version 1, for the “Manual” for the field season 2001. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 62-64). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2001). “Time and space” questionnaire for “space in thinking” subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 14-20). Nijmegen: Max Planck Institute for Psycholinguistics.

    Abstract

    This entry contains: 1. An invitation to think about to what extent the grammar of space and time share lexical and morphosyntactic resources – the suggestions here are only prompts, since it would take a long questionnaire to fully explore this; 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal – but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levinson, S. C. (2006). Cognition at the heart of human interaction. Discourse Studies, 8(1), 85-93. doi:10.1177/1461445606059557.

    Abstract

    Sometimes it is thought that there are serious differences between theories of discourse that turn on the role of cognition in the theory. This is largely a misconception: for example, with its emphasis on participants’ own understandings, its principles of recipient design and projection, Conversation Analysis is hardly anti-cognitive. If there are genuine disagreements they rather concern a preference for ‘lean’ versus ‘rich’ metalanguages and different methodologies. The possession of a multi-levelled model, separating out what the individual brings to interaction from the emergent properties of interaction, would make it easier to resolve some of these issues. Meanwhile, these squabbles on the margins distract us from a much more central and more interesting issue: is there a very special cognition-for-interaction, which underlies and underpins all language and discourse? Prima facie evidence suggests that there is, and different approaches can contribute to our understanding of it.
  • Levinson, S. C. (2006). Introduction: The evolution of culture in a microcosm. In S. C. Levinson, & P. Jaisson (Eds.), Evolution and culture: A Fyssen Foundation Symposium (pp. 1-41). Cambridge: MIT Press.
  • Levinson, S. C. (2006). Matrilineal clans and kin terms on Rossel Island. Anthropological Linguistics, 48, 1-43.

    Abstract

    Yélî Dnye, the language of Rossel Island, Louisiade archipelago, Papua New Guinea, is a non-Austronesian isolate of considerable interest for the prehistory of the area. The kin term, clan, and kinship systems have some superficial similarities with surrounding Austronesian ones, but many underlying differences. The terminology, here properly described for the first time, is highly complex, and seems adapted to a dual descent system, with Crow-type skewing reflecting matrilineal descent, but a system of reciprocals also reflecting the "unity of the patriline." It may be analyzed in three mutually consistent ways: as a system of classificatory reciprocals, as a clan-based sociocentric system, and as collapses and skewings across a genealogical net. It makes an interesting contrast to the Trobriand system, and suggests that the alternative types of account offered by Edmund Leach and Floyd Lounsbury for the Trobriand system both have application to the Rossel system. The Rossel system has features (e.g., patrilineal biases, dual descent, collective [dyadic] kin terms, terms for alternating generations) that may be indicative of pre-Austronesian social systems of the area.
  • Levinson, S. C. (2006). Language in the 21st century. Language, 82, 1-2.
  • Levinson, S. C. (1996). Frames of reference and Molyneux's question: Cross-linguistic evidence. In P. Bloom, M. Peterson, L. Nadel, & M. Garrett (Eds.), Language and space (pp. 109-169). Cambridge, MA: MIT press.
  • Levinson, S. C. (2001). Maxim. In S. Duranti (Ed.), Key terms in language and culture (pp. 139-142). Oxford: Blackwell.
  • Levinson, S. C., Enfield, N. J., & Senft, G. (2001). Kinship domain for 'space in thinking' subproject. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 85-88). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874655.
  • Levinson, S. C. (2002). Landscape terms and place names in Yélî Dnye, the language of Rossel Island, PNG. In S. Kita (Ed.), 2002 Supplement (version 3) for the “Manual” for the field season 2001 (pp. 8-13). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (1996). Language and space. Annual Review of Anthropology, 25, 353-382. doi:10.1146/annurev.anthro.25.1.353.

    Abstract

    This review describes some recent, unexpected findings concerning variation in spatial language across cultures, and places them in the context of the general anthropology of space on the one hand, and theories of spatial cognition in the cognitive sciences on the other. There has been much concern with the symbolism of space in anthropological writings, but little on concepts of space in practical activities. This neglect of everyday spatial notions may be due to unwitting ethnocentrism, the assumption in Western thinking generally that notions of space are universally of a single kind. Recent work shows that systems of spatial reckoning and description can in fact be quite divergent across cultures, linguistic differences correlating with distinct cognitive tendencies. This unexpected cultural variation raises interesting questions concerning the relation between cultural and linguistic concepts and the biological foundations of cognition. It argues for more sophisticated models relating culture and cognition than we currently have available.
  • Levinson, S. C., & Wittenburg, P. (2001). Language as cultural heritage - Promoting research and public awareness on the Internet. In J. Renn (Ed.), ECHO - An Infrastructure to Bring European Cultural Heritage Online (pp. 104-111). Berlin: Max Planck Institute for the History of Science.

    Abstract

    The ECHO proposal aims to bring to life the cultural heritage of Europe, through internet technology that encourages collaboration across the Humanities disciplines which interpret it – at the same time making all this scholarship accessible to the citizens of Europe. An essential part of the cultural heritage of Europe is the diverse set of languages used on the continent, in their historical, literary and spoken forms. Amongst these are the ‘hidden languages’ used by minorities but of wide interest to the general public. We take the 18 Sign Languages of the EEC – the natural languages of the deaf - as an example. Little comparative information about these is available, despite their special scientific importance, the widespread public interest and the policy implications. We propose a research project on these languages based on placing fully annotated digitized moving images of each of these languages on the internet. This requires significant development of multi-media technology which would allow distributed annotation of a central corpus, together with the development of special search techniques. The technology would have widespread application to all cultural performances recorded as sound plus moving images. Such a project captures in microcosm the essence of the ECHO proposal: cultural heritage is nothing without the humanities research which contextualizes and gives it comparative assessment; by marrying information technology to humanities research, we can bring these materials to a wider public while simultaneously boosting Europe as a research area.
  • Levinson, S. C., Kita, S., & Enfield, N. J. (2001). Locally-anchored narrative. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 147). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874660.

    Abstract

    As for 'Locally-anchored spatial gestures task, version 2', a major goal of this task is to elicit locally-anchored spatial gestures across different cultures. “Locally-anchored spatial gestures” are gestures that are roughly oriented to the actual geographical direction of referents. Rather than set up an interview situation, this task involves recording informal, animated narrative delivered to a native-speaker interlocutor. Locally-anchored gestures produced in such narrative are roughly comparable to those collected in the interview task. The data collected can also be used to investigate a wide range of other topics.
  • Levinson, S. C. (1996). Introduction to part II. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 133-144). Cambridge: Cambridge University Press.
  • Levinson, S. C. (1996). Relativity in spatial conception and description. In J. J. Gumperz, & S. C. Levinson (Eds.), Rethinking linguistic relativity (pp. 177-202). Cambridge University Press.
  • Levinson, S. C. (1981). The essential inadequacies of speech act models of dialogue. In H. Parret, M. Sbisà, & J. Verschueren (Eds.), Possibilities and limitations of pragmatics: Proceedings of the Conference on Pragmatics, Urbino, July 8–14, 1979 (pp. 473-492). Amsterdam: John Benjamins.
  • Levinson, S. C. (1981). Some pre-observations on the modelling of dialogue. Discourse Processes, 4(2), 93-116. doi:10.1080/01638538109544510.

    Abstract

    This article offers pre-observations on the modelling of dialogue, examining the assumptions that underlie speech act models of dialogue, the identifiability of utterance units corresponding to unit acts, and the capacity of such models to capture the actual properties of natural dialogue.
  • Levinson, S. C. (2001). Space: Linguistic expression. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 22 (pp. 14749-14752). Oxford: Pergamon.
  • Levinson, S. C. (2001). Place and space in the sculpture of Anthony Gormley - An anthropological perspective. In S. D. McElroy (Ed.), Some of the facts (pp. 68-109). St Ives: Tate Gallery.
  • Levinson, S. C. (2001). Pragmatics. In N. Smelser, & P. Baltes (Eds.), International Encyclopedia of Social and Behavioral Sciences: Vol. 17 (pp. 11948-11954). Oxford: Pergamon.
  • Levinson, S. C., & Enfield, N. J. (2001). Preface and priorities. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 3). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C., & Senft, G. (1996). Zur Semantik der Verben INTRARE und EXIRE in verschiedenen Sprachen. In Jahrbuch der Max-Planck-Gesellschaft 1996 (pp. 340-344). München: Generalverwaltung der Max-Planck-Gesellschaft München.
  • Lind, J., Persson, J., Ingvar, M., Larsson, A., Cruts, M., Van Broeckhoven, C., Adolfsson, R., Bäckman, L., Nilsson, L.-G., Petersson, K. M., & Nyberg, L. (2006). Reduced functional brain activity response in cognitively intact apolipoprotein E ε4 carriers. Brain, 129(5), 1240-1248. doi:10.1093/brain/awl054.

    Abstract

    The apolipoprotein E ε4 (APOE ε4) is the main known genetic risk factor for Alzheimer's disease. Genetic assessments in combination with other diagnostic tools, such as neuroimaging, have the potential to facilitate early diagnosis. In this large-scale functional MRI (fMRI) study, we have contrasted 30 APOE ε4 carriers (age range: 49–74 years; 19 females), of which 10 were homozygous for the ε4 allele, and 30 non-carriers with regard to brain activity during a semantic categorization task. Test groups were closely matched for sex, age and education. Critically, both groups were cognitively intact and thus symptom-free of Alzheimer's disease. APOE ε4 carriers showed reduced task-related responses in the left inferior parietal cortex, and bilaterally in the anterior cingulate region. A dose-related response was observed in the parietal area such that diminution was most pronounced in homozygous compared with heterozygous carriers. In addition, contrasts of processing novel versus familiar items revealed an abnormal response in the right hippocampus in the APOE ε4 group, mainly expressed as diminished sensitivity to the relative novelty of stimuli. Collectively, these findings indicate that genetic risk translates into reduced functional brain activity, in regions pertinent to Alzheimer's disease, well before alterations can be detected at the behavioural level.
  • Liszkowski, U. (2006). Infant pointing at twelve months: Communicative goals, motives, and social-cognitive abilities. In N. J. Enfield, & S. C. Levinson (Eds.), Roots of human sociality: culture, cognition and interaction (pp. 153-178). New York: Berg.
  • Liszkowski, U., Carpenter, M., Striano, T., & Tomasello, M. (2006). Twelve- and 18-month-olds point to provide information for others. Journal of Cognition and Development, 7, 173-187. doi:10.1207/s15327647jcd0702_2.

    Abstract

    Classically, infants are thought to point for 2 main reasons: (a) They point imperatively when they want an adult to do something for them (e.g., give them something; “Juice!”), and (b) they point declaratively when they want an adult to share attention with them to some interesting event or object (“Look!”). Here we demonstrate the existence of another motive for infants' early pointing gestures: to inform another person of the location of an object that person is searching for. This informative motive for pointing suggests that from very early in ontogeny humans conceive of others as intentional agents with informational states and they have the motivation to provide such information communicatively.
  • Lloyd, S. E., Pearce, S. H. S., Fisher, S. E., Steinmeyer, K., Schwappach, B., Scheinman, S. J., Harding, B., Bolino, A., Devoto, M., Goodyer, P., Rigden, S. P. A., Wrong, O., Jentsch, T. J., Craig, I. W., & Thakker, R. V. (1996). A common molecular basis for three inherited kidney stone diseases [Letter to Nature]. Nature, 379, 445-449. doi:10.1038/379445a0.

    Abstract

    Kidney stones (nephrolithiasis), which affect 12% of males and 5% of females in the western world, are familial in 45% of patients and are most commonly associated with hypercalciuria. Three disorders of hypercalciuric nephrolithiasis (Dent's disease, X-linked recessive nephrolithiasis (XRN), and X-linked recessive hypophosphataemic rickets (XLRH)) have been mapped to Xp11.22 (refs 5-7). A microdeletion in one Dent's disease kindred allowed the identification of a candidate gene, CLCN5 (refs 8,9) which encodes a putative renal chloride channel. Here we report the investigation of 11 kindreds with these renal tubular disorders for CLCN5 abnormalities; this identified three nonsense, four missense and two donor splice site mutations, together with one intragenic deletion and one microdeletion encompassing the entire gene. Heterologous expression of wild-type CLCN5 in Xenopus oocytes yielded outwardly rectifying chloride currents, which were either abolished or markedly reduced by the mutations. The common aetiology for Dent's disease, XRN and XLRH indicates that CLCN5 may be involved in other renal tubular disorders associated with kidney stones.
  • Maess, B., Friederici, A. D., Damian, M., Meyer, A. S., & Levelt, W. J. M. (2002). Semantic category interference in overt picture naming: Sharpening current density localization by PCA. Journal of Cognitive Neuroscience, 14(3), 455-462. doi:10.1162/089892902317361967.

    Abstract

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2006). Covariation and quantifier polarity: What determines causal attribution in vignettes? Cognition, 99(1), 35-51. doi:10.1016/j.cognition.2004.12.004.

    Abstract

    Tests of causal attribution often use verbal vignettes, with covariation information provided through statements quantified with natural language expressions. The effect of covariation information has typically been taken to show that set size information affects attribution. However, recent research shows that quantifiers provide information about discourse focus as well as covariation information. In the attribution literature, quantifiers are used to depict covariation, but they confound quantity and focus. In four experiments, we show that focus explains all (Experiment 1) or some (Experiments 2, 3 and 4) of the impact of covariation information on the attributions made, confirming the importance of the confound. Attribution experiments using vignettes that present covariation information with natural language quantifiers may overestimate the impact of set size information, and ignore the impact of quantifier-induced focus.
  • Majid, A. (2006). Body part categorisation in Punjabi. Language Sciences, 28(2-3), 241-261. doi:10.1016/j.langsci.2005.11.012.

    Abstract

    A key question in categorisation is to what extent people categorise in the same way, or differently. This paper examines categorisation of the body in Punjabi, an Indo-European language spoken in Pakistan and India. First, an inventory of body part terms is presented, illustrating how Punjabi speakers segment and categorise the body. There are some noteworthy terms in the inventory, which illustrate categories in Punjabi that are unusual when compared to other languages presented in this volume. Second, Punjabi speakers’ conceptualisation of the relationship between body parts is explored. While some body part terms are viewed as being partonomically related, others are viewed as being in a locative relationship. It is suggested that there may be key ways in which languages differ in both the categorisation of the body into parts, and in how these parts are related to one another.
  • Majid, A. (2002). Frames of reference and language concepts. Trends in Cognitive Sciences, 6(12), 503-504. doi:10.1016/S1364-6613(02)02024-7.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2002). The influence of animacy on relative clause processing. Journal of Memory and Language, 47(1), 50-68. doi:10.1006/jmla.2001.2837.

    Abstract

    In previous research it has been shown that subject relative clauses are easier to process than object relative clauses. Several theories have been proposed that explain the difference on the basis of different theoretical perspectives. However, previous research tested relative clauses only with animate protagonists. In a corpus study of Dutch and German newspaper texts, we show that animacy is an important determinant of the distribution of subject and object relative clauses. In two experiments in Dutch, in which the animacy of the object of the relative clause is varied, no difference in reading time is obtained between subject and object relative clauses when the object is inanimate. The experiments show that animacy influences the processing difficulty of relative clauses. These results can only be accounted for by current major theories of relative clause processing when additional assumptions are introduced, and at the same time show that the possibility of semantically driven analysis can be considered as a serious alternative.
  • Mak, W. M., Vonk, W., & Schriefers, H. (2006). Animacy in processing relative clauses: The hikers that rocks crush. Journal of Memory and Language, 54(4), 466-490. doi:10.1016/j.jml.2006.01.001.

    Abstract

    For several languages, a preference for subject relative clauses over object relative clauses has been reported. However, Mak, Vonk, and Schriefers (2002) showed that there is no such preference for relative clauses with an animate subject and an inanimate object. A Dutch object relative clause as …de rots, die de wandelaars beklommen hebben… (‘the rock, that the hikers climbed’) did not show longer reading times than its subject relative clause counterpart …de wandelaars, die de rots beklommen hebben… (‘the hikers, who climbed the rock’). In the present paper, we explore the factors that might contribute to this modulation of the usual preference for subject relative clauses. Experiment 1 shows that the animacy of the antecedent per se is not the decisive factor. On the contrary, in relative clauses with an inanimate antecedent and an inanimate relative-clause-internal noun phrase, the usual preference for subject relative clauses is found. In Experiments 2 and 3, subject and object relative clauses were contrasted in which either the subject or the object was inanimate. The results are interpreted in a framework in which the choice for an analysis of the relative clause is based on the interplay of animacy with topichood and verb semantics. This framework accounts for the commonly reported preference for subject relative clauses over object relative clauses as well as for the pattern of data found in the present experiments.
  • Mangione-Smith, R., Elliott, M. N., Stivers, T., McDonald, L. L., & Heritage, J. (2006). Ruling out the need for antibiotics: Are we sending the right message? Archives of Pediatrics & Adolescent Medicine, 160(9), 945-952.
  • Marlow, A. J., Fisher, S. E., Richardson, A. J., Francks, C., Talcott, J. B., Monaco, A. P., Stein, J. F., & Cardon, L. R. (2002). Investigation of quantitative measures related to reading disability in a large sample of sib-pairs from the UK. Behavior Genetics, 31(2), 219-230. doi:10.1023/A:1010209629021.

    Abstract

    We describe a family-based sample of individuals with reading disability collected as part of a quantitative trait loci (QTL) mapping study. Eighty-nine nuclear families (135 independent sib-pairs) were identified through a single proband using a traditional discrepancy score of predicted/actual reading ability and a known family history. Eight correlated psychometric measures were administered to each sibling, including single word reading, spelling, similarities, matrices, spoonerisms, nonword and irregular word reading, and a pseudohomophone test. Summary statistics for each measure showed a reduced mean for the probands compared to the co-sibs, which in turn was lower than that of the population. This partial co-sib regression back to the mean indicates that the measures are influenced by familial factors and therefore, may be suitable for a mapping study. The variance of each of the measures remained largely unaffected, which is reassuring for the application of a QTL approach. Multivariate genetic analysis carried out to explore the relationship between the measures identified a common factor between the reading measures that accounted for 54% of the variance. Finally the familiality estimates (range 0.32–0.73) obtained for the reading measures including the common factor (0.68) supported their heritability. These findings demonstrate the viability of this sample for QTL mapping, and will assist in the interpretation of any subsequent linkage findings in an ongoing genome scan.
  • Martin, A., & Van Turennout, M. (2002). Searching for the neural correlates of object priming. In L. R. Squire, & D. L. Schacter (Eds.), The Neuropsychology of Memory (pp. 239-247). New York: Guilford Press.
  • Mauner, G., Koenig, J.-P., Melinger, A., & Bienvenue, B. (2002). The lexical source of unexpressed participants and their role in sentence and discourse understanding. In P. Merlo, & S. Stevenson (Eds.), The Lexical Basis of Sentence Processing: Formal, Computational and Experimental Issues (pp. 233-254). Amsterdam: John Benjamins.
  • Mauner, G., Melinger, A., Koenig, J.-P., & Bienvenue, B. (2002). When is schematic participant information encoded: Evidence from eye-monitoring. Journal of Memory and Language, 47(3), 386-406. doi:10.1016/S0749-596X(02)00009-8.

    Abstract

    Two eye-monitoring studies examined when unexpressed schematic participant information specified by verbs is used during sentence processing. Experiment 1 compared the processing of sentences with passive and intransitive verbs hypothesized to introduce or not introduce, respectively, an agent when their main clauses were preceded by either agent-dependent rationale clauses or adverbial clause controls. While there were no differences in the processing of passive clauses following rationale and control clauses, intransitive verb clauses elicited anomaly effects following agent-dependent rationale clauses. To determine whether the source of this immediately available schematic participant information is lexically specified or instead derived solely from conceptual sources associated with verbs, Experiment 2 compared the processing of clauses with passive and middle verbs following rationale clauses (e.g., To raise money for the charity, the vase was/had sold quickly…). Although both passive and middle verb forms denote situations that logically require an agent, middle verbs, which by hypothesis do not lexically specify an agent, elicited longer processing times than passive verbs in measures of early processing. These results demonstrate that participants access and interpret lexically encoded schematic participant information in the process of recognizing a verb.
  • McQueen, J. M., Cutler, A., & Norris, D. (2006). Phonological abstraction in the mental lexicon. Cognitive Science, 30(6), 1113-1126. doi:10.1207/s15516709cog0000_79.

    Abstract

    A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f–s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife–nice), none of which had been heard in training. Listeners interpreted the minimal pair words differently in the second phase according to the training received in the first phase. Therefore, lexically mediated retuning of phoneme perception not only influences categorical decisions about fricatives (Norris, McQueen, & Cutler, 2003), but also benefits recognition of words outside the training set. The observed generalization across words suggests that this retuning occurs prelexically. Therefore, lexical processing involves sublexical phonological abstraction, not only accumulation of acoustic episodes.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). The dynamic nature of speech perception. Language and Speech, 49(1), 101-112.

    Abstract

    The speech perception system must be flexible in responding to the variability in speech sounds caused by differences among speakers and by language change over the lifespan of the listener. Indeed, listeners use lexical knowledge to retune perception of novel speech (Norris, McQueen, & Cutler, 2003). In that study, Dutch listeners made lexical decisions to spoken stimuli, including words with an ambiguous fricative (between [f] and [s]), in either [f]- or [s]-biased lexical contexts. In a subsequent categorization test, the former group of listeners identified more sounds on an [εf] - [εs] continuum as [f] than the latter group. In the present experiment, listeners received the same exposure and test stimuli, but did not make lexical decisions to the exposure items. Instead, they counted them. Categorization results were indistinguishable from those obtained earlier. These adjustments in fricative perception therefore do not depend on explicit judgments during exposure. This learning effect thus reflects automatic retuning of the interpretation of acoustic-phonetic information.
  • McQueen, J. M., Norris, D., & Cutler, A. (2006). Are there really interactive processes in speech perception? Trends in Cognitive Sciences, 10(12), 533-533. doi:10.1016/j.tics.2006.10.004.
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni) or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • Meira, S., & Levinson, S. C. (2001). Topological tasks: General introduction. In S. C. Levinson, & N. J. Enfield (Eds.), Manual for the field season 2001 (pp. 29-51). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.874665.
  • Melinger, A. (2002). Foot structure and accent in Seneca. International Journal of American Linguistics, 68(3), 287-315.

    Abstract

    Argues that the Seneca accent system can be explained more simply and naturally if the foot structure is reanalyzed as trochaic. The position of the accent is determined by the position and structure of the accented syllable and by the position and structure of the post-tonic syllable; the analysis identifies the pairs of syllables that interact to predict where accent is assigned in different feet.
  • Menenti, L. (2006). L2-L1 word association in bilinguals: Direct evidence. Nijmegen CNS, 1, 17-24.

    Abstract

    The Revised Hierarchical Model (Kroll and Stewart, 1994) assumes that words in a bilingual’s languages have separate word form representations but shared conceptual representations. Two routes lead from an L2 word form to its conceptual representation: the word association route, where concepts are accessed through the corresponding L1 word form, and the concept mediation route, with direct access from L2 to concepts. To investigate word association, we presented proficient late German-Dutch bilinguals with L2 non-cognate word pairs in which the L1 translation of the first word rhymed with the second word (e.g. GRAP (joke) – Witz – FIETS (bike)). If the first word in a pair activated its L1 equivalent, then a phonological priming effect on the second word was expected. Priming was observed in lexical decision but not in semantic decision (living/non-living) on L2 words. In a control group of Dutch native speakers, no priming effect was found. This suggests that proficient bilinguals still make use of their L1 word form lexicon to process L2 in lexical decision.
  • Meyer, A. S., Levelt, W. J. M., & Wissink, M. T. (1996). Een modulair model van zinsproductie. Logopedie, 9(2), 21-31.

    Abstract

    This contribution discusses a modular model of sentence production. The planning processes that precede the production of a sentence can be divided into two main components: conceptualization (devising the content of the utterance) and formulation (determining the linguistic form). The formulation process in turn consists of two components, namely grammatical and phonological encoding. Each of these components again comprises a number of subcomponents. This article describes the specific task of each component, how it is carried out, and how the components work together. Several important methods of language production research are also discussed.
  • Meyer, A. S. (1996). Lexical access in phrase and sentence production: Results from picture-word interference experiments. Journal of Memory and Language, 35, 477-496. doi:10.1006/jmla.1996.0026.

    Abstract

    Four experiments investigated the span of advance planning for phrases and short sentences. Dutch subjects were presented with pairs of objects, which they named using noun-phrase conjunctions (e.g., the translation equivalent of "the arrow and the bag") or sentences ("the arrow is next to the bag"). Each display was accompanied by an auditory distracter, which was related in form or meaning to the first or second noun of the utterance or unrelated to both. For sentences and phrases, the mean speech onset time was longer when the distracter was semantically related to the first or second noun and shorter when it was phonologically related to the first noun than when it was unrelated. No phonological facilitation was found for the second noun. This suggests that before utterance onset both target lemmas and the first target form were selected.
  • Miller, M., & Klein, W. (1981). Moral argumentations among children: A case study. Linguistische Berichte, 74, 1-19.
  • Mitterer, H., & Cutler, A. (2006). Speech perception. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 11) (pp. 770-782). Amsterdam: Elsevier.

    Abstract

    The goal of speech perception is understanding a speaker's message. To achieve this, listeners must recognize the words that comprise a spoken utterance. This in turn implies distinguishing these words from other minimally different words (e.g., word from bird, etc.), and this involves making phonemic distinctions. The article summarizes research on the perception of phonemic distinctions, on how listeners cope with the continuity and variability of speech signals, and on how phonemic information is mapped onto the representations of words. Particular attention is paid to theories of speech perception and word recognition.
  • Mitterer, H. (2006). On the causes of compensation for coarticulation: Evidence for phonological mediation. Perception & Psychophysics, 68(7), 1227-1240.

    Abstract

    This study examined whether compensation for coarticulation in fricative–vowel syllables is phonologically mediated or a consequence of auditory processes. Smits (2001a) had shown that compensation occurs for anticipatory lip rounding in a fricative caused by a following rounded vowel in Dutch. In a first experiment, the possibility that compensation is due to general auditory processing was investigated using nonspeech sounds. These did not cause context effects akin to compensation for coarticulation, although nonspeech sounds influenced speech sound identification in an integrative fashion. In a second experiment, a possible phonological basis for compensation for coarticulation was assessed by using audiovisual speech. Visual displays, which induced the perception of a rounded vowel, also influenced compensation for anticipatory lip rounding in the fricative. These results indicate that compensation for anticipatory lip rounding in fricative–vowel syllables is phonologically mediated. This result is discussed in the light of other compensation-for-coarticulation findings and general theories of speech perception.
  • Mitterer, H., Csépe, V., & Blomert, L. (2006). The role of perceptual integration in the recognition of assimilated word forms. Quarterly Journal of Experimental Psychology, 59(8), 1395-1424. doi:10.1080/17470210500198726.

    Abstract

    We investigated how spoken words are recognized when they have been altered by phonological assimilation. Previous research has shown that there is a process of perceptual compensation for phonological assimilations. Three recently formulated proposals regarding the mechanisms for compensation for assimilation make different predictions with regard to the level at which compensation is supposed to occur as well as regarding the role of specific language experience. In the present study, Hungarian words and nonwords, in which a viable and an unviable liquid assimilation was applied, were presented to Hungarian and Dutch listeners in an identification task and a discrimination task. Results indicate that viably changed forms are difficult to distinguish from canonical forms independent of experience with the assimilation rule applied in the utterances. This reveals that auditory processing contributes to perceptual compensation for assimilation, while language experience has only a minor role to play when identification is required.
  • Mitterer, H., Csépe, V., Honbolygo, F., & Blomert, L. (2006). The recognition of phonologically assimilated words does not depend on specific language experience. Cognitive Science, 30(3), 451-479. doi:10.1207/s15516709cog0000_57.

    Abstract

    In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/→[leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation by both Dutch and Hungarian listeners. Two additional experiments showed that this was due to the acoustic properties of the assimilated utterance, confirming earlier reports that phonetic detail is important in compensation for assimilation. Our data indicate that compensation for assimilation can occur without experience with an assimilation rule, in line with phonetic–phonological theories that assume that speech production is influenced by speech-perception abilities.
  • Mitterer, H. (2006). Is vowel normalization independent of lexical processing? Phonetica, 63(4), 209-229. doi:10.1159/000097306.

    Abstract

    Vowel normalization in speech perception was investigated in three experiments. The range of the second formant in a carrier phrase was manipulated and this affected the perception of a target vowel in a compensatory fashion: A low F2 range in the carrier phrase made it more likely that the target vowel was perceived as a front vowel, that is, with a high F2. Recent experiments indicated that this effect might be moderated by the lexical status of the constituents of the carrier phrase. Manipulation of the lexical status in the present experiments, however, did not affect vowel normalization. In contrast, the range of vowels in the carrier phrase did influence vowel normalization. If the carrier phrase consisted of mid-to-high front vowels only, vowel categories shifted only for mid-to-high front vowels. It is argued that these results are a challenge for episodic models of word recognition.
  • Mitterer, H., & Ernestus, M. (2006). Listeners recover /t/s that speakers reduce: Evidence from /t/-lenition in Dutch. Journal of Phonetics, 34(1), 73-103. doi:10.1016/j.wocn.2005.03.003.

    Abstract

    In everyday speech, words may be reduced. Little is known about the consequences of such reductions for spoken word comprehension. This study investigated /t/-lenition in Dutch in two corpus studies and three perceptual experiments. The production studies revealed that /t/-lenition is most likely to occur after [s] and before bilabial consonants. The perception experiments showed that listeners take into account phonological context, phonetic detail, and the lexical status of the form in the interpretation of codas that may or may not contain a lenited word-final /t/. These results speak against models of word recognition that make hard decisions on a prelexical level.
  • Mortensen, L., Meyer, A. S., & Humphreys, G. W. (2006). Age-related effects on speech production: A review. Language and Cognitive Processes, 21, 238-290. doi:10.1080/01690960444000278.

    Abstract

    In discourse, older adults tend to be more verbose and more disfluent than young adults, especially when the task is difficult and when it places few constraints on the content of the utterance. This may be due to (a) language-specific deficits in planning the content and syntactic structure of utterances or in selecting and retrieving words from the mental lexicon, (b) a general deficit in inhibiting irrelevant information, or (c) the selection of a specific speech style. The possibility that older adults have a deficit in lexical retrieval is supported by the results of picture naming studies, in which older adults have been found to name objects less accurately and more slowly than young adults, and by the results of definition naming studies, in which older adults have been found to experience more tip-of-the-tongue (TOT) states than young adults. The available evidence suggests that these age differences are largely due to weakening of the connections linking word lemmas to phonological word forms, though adults above 70 years of age may have an additional deficit in lemma selection.
  • Müller, O., & Hagoort, P. (2006). Access to lexical information in language comprehension: Semantics before syntax. Journal of Cognitive Neuroscience, 18(1), 84-96. doi:10.1162/089892906775249997.

    Abstract

    The recognition of a word makes available its semantic and syntactic properties. Using electrophysiological recordings, we investigated whether one set of these properties is available earlier than the other set. Dutch participants saw nouns on a computer screen and performed push-button responses: In one task, grammatical gender determined response hand (left/right) and semantic category determined response execution (go/no-go). In the other task, response hand depended on semantic category, whereas response execution depended on gender. During the latter task, response preparation occurred on no-go trials, as measured by the lateralized readiness potential: Semantic information was used for response preparation before gender information inhibited this process. Furthermore, an inhibition-related N2 effect occurred earlier for inhibition by semantics than for inhibition by gender. In summary, electrophysiological measures of both response preparation and inhibition indicated that the semantic word property was available earlier than the syntactic word property when participants read single words.
  • Murphy, S. K., Nolan, C. M., Huang, Z., Kucera, K. S., Freking, B. A., Smith, T. P., Leymaster, K. A., Weidman, J. R., & Jirtle, R. L. (2006). Callipyge mutation affects gene expression in cis: A potential role for chromatin structure. Genome Research, 16, 340-346. doi:10.1101/gr.4389306.

    Abstract

    Muscular hypertrophy in callipyge sheep results from a single nucleotide substitution located in the genomic interval between the imprinted Delta, Drosophila, Homolog-like 1 (DLK1) and Maternally Expressed Gene 3 (MEG3). The mechanism linking the mutation to muscle hypertrophy is unclear but involves DLK1 overexpression. The mutation is contained within CLPG1 transcripts produced from this region. Herein we show that CLPG1 is expressed prenatally in the hypertrophy-responsive longissimus dorsi muscle by all four possible genotypes, but postnatal expression is restricted to sheep carrying the mutation. Surprisingly, the mutation results in nonimprinted monoallelic transcription of CLPG1 from only the mutated allele in adult sheep, whereas it is expressed biallelically during prenatal development. We further demonstrate that local CpG methylation is altered by the presence of the mutation in longissimus dorsi of postnatal sheep. For 10 CpG sites flanking the mutation, methylation is similar prenatally across genotypes, but doubles postnatally in normal sheep. This normal postnatal increase in methylation is significantly repressed in sheep carrying one copy of the mutation, and repressed even further in sheep with two mutant alleles. The attenuation in methylation status in the callipyge sheep correlates with the onset of the phenotype, continued CLPG1 transcription, and high-level expression of DLK1. In contrast, normal sheep exhibit hypermethylation of this locus after birth and CLPG1 silencing, which coincides with DLK1 transcriptional repression. These data are consistent with the notion that the callipyge mutation inhibits perinatal nucleation of regional chromatin condensation resulting in continued elevated transcription of prenatal DLK1 levels in adult callipyge sheep. We propose a model incorporating these results that can also account for the enigmatic normal phenotype of homozygous mutant sheep.
  • Narasimhan, B., & Gullberg, M. (2006). Perspective-shifts in event descriptions in Tamil child language. Journal of Child Language, 33(1), 99-124. doi:10.1017/S0305000905007191.

    Abstract

    Children are able to take multiple perspectives in talking about entities and events. But the nature of children's sensitivities to the complex patterns of perspective-taking in adult language is unknown. We examine perspective-taking in four- and six-year-old Tamil-speaking children describing placement events, as reflected in the use of a general placement verb (veyyii ‘put’) versus two fine-grained caused posture expressions specifying orientation, either vertical (nikka veyyii ‘make stand’) or horizontal (paDka veyyii ‘make lie’). We also explore whether animacy systematically promotes shifts to a fine-grained perspective. The results show that four- and six-year-olds switch perspectives as flexibly and systematically as adults do. Animacy influences shifts to a fine-grained perspective similarly across age groups. However, unexpectedly, six-year-olds also display greater overall sensitivity to orientation, preferring the vertical over the horizontal caused posture expression. Despite early flexibility, the factors governing the patterns of perspective-taking on events are undergoing change even in later childhood, reminiscent of U-shaped semantic reorganizations observed in children's lexical knowledge. The present study points to the intriguing possibility that mechanisms that operate at the level of semantics could also influence subtle patterns of lexical choice and perspective-shifts.
  • Newbury, D. F., Cleak, J. D., Ishikawa-Brush, Y., Marlow, A. J., Fisher, S. E., Monaco, A. P., Stott, C. M., Merricks, M. J., Goodyer, I. M., Bolton, P. F., Jannoun, L., Slonims, V., Baird, G., Pickles, A., Bishop, D. V. M., Helms., P. J., & The SLI Consortium (2002). A genomewide scan identifies two novel loci involved in specific language impairment. American Journal of Human Genetics, 70(2), 384-398. doi:10.1086/338649.

    Abstract

    Approximately 4% of English-speaking children are affected by specific language impairment (SLI), a disorder in the development of language skills despite adequate opportunity and normal intelligence. Several studies have indicated the importance of genetic factors in SLI; a positive family history confers an increased risk of development, and concordance in monozygotic twins consistently exceeds that in dizygotic twins. However, like many behavioral traits, SLI is assumed to be genetically complex, with several loci contributing to the overall risk. We have compiled 98 families drawn from epidemiological and clinical populations, all with probands whose standard language scores fall ⩾1.5 SD below the mean for their age. Systematic genomewide quantitative-trait–locus analysis of three language-related measures (i.e., the Clinical Evaluation of Language Fundamentals–Revised [CELF-R] receptive and expressive scales and the nonword repetition [NWR] test) yielded two regions, one on chromosome 16 and one on 19, that both had maximum LOD scores of 3.55. Simulations suggest that, of these two multipoint results, the NWR linkage to chromosome 16q is the most significant, with empirical P values reaching 10^−5, under both Haseman-Elston (HE) analysis (LOD score 3.55; P=.00003) and variance-components (VC) analysis (LOD score 2.57; P=.00008). Single-point analyses provided further support for involvement of this locus, with three markers, under the peak of linkage, yielding LOD scores >1.9. The 19q locus was linked to the CELF-R expressive-language score and exceeds the threshold for suggestive linkage under all types of analysis performed—multipoint HE analysis (LOD score 3.55; empirical P=.00004) and VC (LOD score 2.84; empirical P=.00027) and single-point HE analysis (LOD score 2.49) and VC (LOD score 2.22). Furthermore, both the clinical and epidemiological samples showed independent evidence of linkage on both chromosome 16q and chromosome 19q, indicating that these may represent universally important loci in SLI and, thus, general risk factors for language impairment.
  • Newbury, D. F., Bonora, E., Lamb, J. A., Fisher, S. E., Lai, C. S. L., Baird, G., Jannoun, L., Slonims, V., Stott, C. M., Merricks, M. J., Bolton, P. F., Bailey, A. J., Monaco, A. P., & International Molecular Genetic Study of Autism Consortium (2002). FOXP2 is not a major susceptibility gene for autism or specific language impairment. American Journal of Human Genetics, 70(5), 1318-1327. doi:10.1086/339931.

    Abstract

    The FOXP2 gene, located on human 7q31 (at the SPCH1 locus), encodes a transcription factor containing a polyglutamine tract and a forkhead domain. FOXP2 is mutated in a severe monogenic form of speech and language impairment, segregating within a single large pedigree, and is also disrupted by a translocation in an isolated case. Several studies of autistic disorder have demonstrated linkage to a similar region of 7q (the AUTS1 locus), leading to the proposal that a single genetic factor on 7q31 contributes to both autism and language disorders. In the present study, we directly evaluate the impact of the FOXP2 gene with regard to both complex language impairments and autism, through use of association and mutation screening analyses. We conclude that coding-region variants in FOXP2 do not underlie the AUTS1 linkage and that the gene is unlikely to play a role in autism or more common forms of language impairment.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). When peanuts fall in love: N400 evidence for the power of discourse. Journal of Cognitive Neuroscience, 18(7), 1098-1111. doi:10.1162/jocn.2006.18.7.1098.

    Abstract

    In linguistic theories of how sentences encode meaning, a distinction is often made between the context-free rule-based combination of lexical–semantic features of the words within a sentence ("semantics"), and the contributions made by wider context ("pragmatics"). In psycholinguistics, this distinction has led to the view that listeners initially compute a local, context-independent meaning of a phrase or sentence before relating it to the wider context. An important aspect of such a two-step perspective on interpretation is that local semantics cannot initially be overruled by global contextual factors. In two spoken-language event-related potential experiments, we tested the viability of this claim by examining whether discourse context can overrule the impact of the core lexical–semantic feature animacy, considered to be an innate organizing principle of cognition. Two-step models of interpretation predict that verb–object animacy violations, as in "The girl comforted the clock," will always perturb the unfolding interpretation process, regardless of wider context. When presented in isolation, such anomalies indeed elicit a clear N400 effect, a sign of interpretive problems. However, when the anomalies were embedded in a supportive context (e.g., a girl talking to a clock about his depression), this N400 effect disappeared completely. Moreover, given a suitable discourse context (e.g., a story about an amorous peanut), animacy-violating predicates ("the peanut was in love") were actually processed more easily than canonical predicates ("the peanut was salted"). Our findings reveal that discourse context can immediately overrule local lexical–semantic violations, and therefore suggest that language comprehension does not involve an initially context-free semantic analysis.
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2006). Individual differences and contextual bias in pronoun resolution: Evidence from ERPs. Brain Research, 1118(1), 155-167. doi:10.1016/j.brainres.2006.08.022.

    Abstract

    Although we usually have no trouble finding the right antecedent for a pronoun, the co-reference relations between pronouns and antecedents in everyday language are often ‘formally’ ambiguous. But a pronoun is only really ambiguous if a reader or listener indeed perceives it to be ambiguous. Whether this is the case may depend on at least two factors: the language processing skills of an individual reader, and the contextual bias towards one particular referential interpretation. In the current study, we used event-related brain potentials (ERPs) to explore how both these factors affect the resolution of referentially ambiguous pronouns. We compared ERPs elicited by formally ambiguous and non-ambiguous pronouns that were embedded in simple sentences (e.g., “Jennifer Lopez told Madonna that she had too much money.”). Individual differences in language processing skills were assessed with the Reading Span task, while the contextual bias of each sentence (up to the critical pronoun) had been assessed in a referential cloze pretest. In line with earlier research, ambiguous pronouns elicited a sustained, frontal negative shift relative to non-ambiguous pronouns at the group level. The size of this effect was correlated with Reading Span score, as well as with contextual bias. These results suggest that whether a reader perceives a formally ambiguous pronoun to be ambiguous is subtly co-determined by both individual language processing skills and contextual bias.
  • Norris, D., Cutler, A., McQueen, J. M., & Butterfield, S. (2006). Phonological and conceptual activation in speech comprehension. Cognitive Psychology, 53(2), 146-193. doi:10.1016/j.cogpsych.2006.03.001.

    Abstract

    We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.
  • Norris, D., McQueen, J. M., & Cutler, A. (2002). Bias effects in facilitatory phonological priming. Memory & Cognition, 30(3), 399-411.

    Abstract

    In four experiments, we examined the facilitation that occurs when spoken-word targets rhyme with preceding spoken primes. In Experiment 1, listeners’ lexical decisions were faster to words following rhyming words (e.g., ramp–LAMP) than to words following unrelated primes (e.g., pink–LAMP). No facilitation was observed for nonword targets. Targets that almost rhymed with their primes (foils; e.g., bulk–SULSH) were included in Experiment 2; facilitation for rhyming targets was severely attenuated. Experiments 3 and 4 were single-word shadowing variants of the earlier experiments. There was facilitation for both rhyming words and nonwords; the presence of foils had no significant influence on the priming effect. A major component of the facilitation in lexical decision appears to be strategic: Listeners are biased to say “yes” to targets that rhyme with their primes, unless foils discourage this strategy. The nonstrategic component of phonological facilitation may reflect speech perception processes that operate prior to lexical access.
  • Norris, D., Butterfield, S., McQueen, J. M., & Cutler, A. (2006). Lexically guided retuning of letter perception. Quarterly Journal of Experimental Psychology, 59(9), 1505-1515. doi:10.1080/17470210600739494.

    Abstract

    Participants made visual lexical decisions to upper-case words and nonwords, and then categorized an ambiguous N–H letter continuum. The lexical decision phase included different exposure conditions: Some participants saw an ambiguous letter “?”, midway between N and H, in N-biased lexical contexts (e.g., REIG?), plus words with unambiguous H (e.g., WEIGH); others saw the reverse (e.g., WEIG?, REIGN). The first group categorized more of the test continuum as N than did the second group. Control groups, who saw “?” in nonword contexts (e.g., SMIG?), plus either of the unambiguous word sets (e.g., WEIGH or REIGN), showed no such subsequent effects. Perceptual learning about ambiguous letters therefore appears to be based on lexical knowledge, just as in an analogous speech experiment (Norris, McQueen, & Cutler, 2003) which showed similar lexical influence in learning about ambiguous phonemes. We argue that lexically guided learning is an efficient general strategy available for exploitation by different specific perceptual tasks.
  • Norris, D., McQueen, J. M., Cutler, A., Butterfield, S., & Kearns, R. (2001). Language-universal constraints on speech segmentation. Language and Cognitive Processes, 16, 637-660. doi:10.1080/01690960143000119.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and any likely location of a word boundary, as cued in the speech signal. The experiments examined cases where the residue was either a CVC syllable with a schwa, or a CV syllable with a lax vowel. Although neither of these syllable contexts is a possible lexical word in English, word-spotting in both contexts was easier than in a context consisting of a single consonant. Two control lexical-decision experiments showed that the word-spotting results reflected the relative segmentation difficulty of the words in different contexts. The PWC appears to be language-universal rather than language-specific.
  • Nyberg, L., Forkstam, C., Petersson, K. M., Cabeza, R., & Ingvar, M. (2002). Brain imaging of human memory systems: Between-systems similarities and within-system differences. Cognitive Brain Research, 13(2), 281-292. doi:10.1016/S0926-6410(02)00052-6.

    Abstract

    There is much evidence for the existence of multiple memory systems. However, it has been argued that tasks assumed to reflect different memory systems share basic processing components and are mediated by overlapping neural systems. Here we used multivariate analysis of PET-data to analyze similarities and differences in brain activity for multiple tests of working memory, semantic memory, and episodic memory. The results from two experiments revealed between-systems differences, but also between-systems similarities and within-system differences. Specifically, support was obtained for a task-general working-memory network that may underlie active maintenance. Premotor and parietal regions were salient components of this network. A common network was also identified for two episodic tasks, cued recall and recognition, but not for a test of autobiographical memory. This network involved regions in right inferior and polar frontal cortex, and lateral and medial parietal cortex. Several of these regions were also engaged during the working-memory tasks, indicating shared processing for episodic and working memory. Fact retrieval and synonym generation were associated with increased activity in left inferior frontal and middle temporal regions and right cerebellum. This network was also associated with the autobiographical task, but not with living/non-living classification, and may reflect elaborate retrieval of semantic information. Implications of the present results for the classification of memory tasks with respect to systems and/or processes are discussed.
  • Nyberg, L., Petersson, K. M., Nilsson, L.-G., Sandblom, J., Åberg, C., & Ingvar, M. (2001). Reactivation of motor brain areas during explicit memory for actions. Neuroimage, 14, 521-528. doi:10.1006/nimg.2001.0801.

    Abstract

    Recent functional brain imaging studies have shown that sensory-specific brain regions that are activated during perception/encoding of sensory-specific information are reactivated during memory retrieval of the same information. Here we used PET to examine whether verbal retrieval of action phrases is associated with reactivation of motor brain regions if the actions were overtly or covertly performed during encoding. Compared to a verbal condition, encoding by means of overt as well as covert activity was associated with differential activity in regions in contralateral somatosensory and motor cortex. Several of these regions were reactivated during retrieval. Common to both the overt and covert conditions was reactivation of regions in left ventral motor cortex and left inferior parietal cortex. A direct comparison of the overt and covert activity conditions showed that activation and reactivation of left dorsal parietal cortex and right cerebellum was specific to the overt condition. These results support the reactivation hypothesis by showing that verbal-explicit memory of actions involves areas that are engaged during overt and covert motor activity.
  • O'Connor, L. (2006). Sobre los predicados complejos en el Chontal de la baja. In A. Oseguera (Ed.), Historia y etnografía entre los Chontales de Oaxaca (pp. 119-161). Oaxaca: Instituto Nacional de Antropología e Historia.
  • O'Connor, L. (2006). [Review of the book Toward a cognitive semantics: Concept structuring systems by Leonard Talmy]. Journal of Pragmatics, 38(7), 1126-1134. doi:10.1016/j.pragma.2005.08.007.
  • Ogdie, M. N., Bakker, S. C., Fisher, S. E., Francks, C., Yang, M. H., Cantor, R. M., Loo, S. K., Van der Meulen, E., Pearson, P., Buitelaar, J., Monaco, A., Nelson, S. F., Sinke, R. J., & Smalley, S. L. (2006). Pooled genome-wide linkage data on 424 ADHD ASPs suggests genetic heterogeneity and a common risk locus at 5p13 [Letter to the editor]. Molecular Psychiatry, 11, 5-8. doi:10.1038/sj.mp.4001760.
  • Otake, T., Yoneyama, K., Cutler, A., & van der Lugt, A. (1996). The representation of Japanese moraic nasals. Journal of the Acoustical Society of America, 100, 3831-3842. doi:10.1121/1.417239.

    Abstract

    Nasal consonants in syllabic coda position in Japanese assimilate to the place of articulation of a following consonant. The resulting forms may be perceived as different realizations of a single underlying unit, and indeed the kana orthographies represent them with a single character. In the present study, Japanese listeners' response time to detect nasal consonants was measured. Nasals in coda position, i.e., moraic nasals, were detected faster and more accurately than nonmoraic nasals, as reported in previous studies. The place of articulation with which moraic nasals were realized affected neither response time nor accuracy. Non-native subjects who knew no Japanese, given the same materials with the same instructions, simply failed to respond to moraic nasals which were realized bilabially. When the nasals were cross-spliced across place of articulation contexts the Japanese listeners still showed no significant place of articulation effects, although responses were faster and more accurate to unspliced than to cross-spliced nasals. When asked to detect the phoneme following the (cross-spliced) moraic nasal, Japanese listeners showed effects of mismatch between nasal and context, but non-native listeners did not. Together, these results suggest that Japanese listeners are capable of very rapid abstraction from phonetic realization to a unitary representation of moraic nasals; but they can also use the phonetic realization of a moraic nasal effectively to obtain anticipatory information about following phonemes.