Publications

  • Abdel Rahman, R., Van Turennout, M., & Levelt, W. J. M. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(5), 850-860. doi:10.1037/0278-7393.29.5.850.

    Abstract

    In the present study, the authors examined with event-related brain potentials whether phonological encoding in picture naming is mediated by basic semantic feature retrieval or proceeds independently. In a manual 2-choice go/no-go task the choice response depended on a semantic classification (animal vs. object) and the execution decision was contingent on a classification of name phonology (vowel vs. consonant). The introduction of a semantic task mixing procedure allowed for selectively manipulating the speed of semantic feature retrieval. Serial and parallel models were tested on the basis of their differential predictions for the effect of this manipulation on the lateralized readiness potential and N200 component. The findings indicate that phonological code retrieval is not strictly contingent on prior basic semantic feature processing.
  • Abdel Rahman, R., & Sommer, W. (2003). Does phonological encoding in speech production always follow the retrieval of semantic knowledge?: Electrophysiological evidence for parallel processing. Cognitive Brain Research, 16(3), 372-382. doi:10.1016/S0926-6410(02)00305-1.

    Abstract

    In this article a new approach to the distinction between serial/contingent and parallel/independent processing in the human cognitive system is applied to semantic knowledge retrieval and phonological encoding of the word form in picture naming. In two-choice go/nogo tasks pictures of objects were manually classified on the basis of semantic and phonological information. An additional manipulation of the duration of the faster and presumably mediating process (semantic retrieval) made it possible to derive differential predictions from the two alternative models. These predictions were tested with two event-related brain potentials (ERPs), the lateralized readiness potential (LRP) and the N200. The findings indicate that phonological encoding can proceed in parallel to the retrieval of semantic features. A suggestion is made as to how these findings can be accommodated within models of speech production.
  • Akker, E., & Cutler, A. (2003). Prosodic cues to semantic structure in native and nonnative listening. Bilingualism: Language and Cognition, 6(2), 81-96. doi:10.1017/S1366728903001056.

    Abstract

    Listeners efficiently exploit sentence prosody to direct attention to words bearing sentence accent. This effect has been explained as a search for focus, furthering rapid apprehension of semantic structure. A first experiment supported this explanation: English listeners detected phoneme targets in sentences more rapidly when the target-bearing words were in accented position or in focussed position, but the two effects interacted, consistent with the claim that the effects serve a common cause. In a second experiment a similar asymmetry was observed with Dutch listeners and Dutch sentences. In a third and a fourth experiment, proficient Dutch users of English heard English sentences; here, however, the two effects did not interact. The results suggest that less efficient mapping of prosody to semantics may be one way in which nonnative listening fails to equal native listening.
  • Alario, F.-X., Schiller, N. O., Domoto-Reilly, K., & Caramazza, A. (2003). The role of phonological and orthographic information in lexical selection. Brain and Language, 84(3), 372-398. doi:10.1016/S0093-934X(02)00556-4.

    Abstract

    We report the performance of two patients with lexico-semantic deficits following left MCA CVA. Both patients produced similar numbers of semantic paraphasias in naming tasks, but presented one crucial difference: grapheme-to-phoneme and phoneme-to-grapheme conversion procedures were available only to one of them. We investigated the impact of this availability on the process of lexical selection during word production. The patient for whom conversion procedures were not operational produced semantic errors in transcoding tasks such as reading and writing to dictation; furthermore, when asked to name a given picture in multiple output modalities—e.g., to say the name of a picture and immediately after to write it down—he produced lexically inconsistent responses. By contrast, the patient for whom conversion procedures were available did not produce semantic errors in transcoding tasks and did not produce lexically inconsistent responses in multiple picture-naming tasks. These observations are interpreted in the context of the summation hypothesis (Hillis & Caramazza, 1991), according to which the activation of lexical entries for production would be made on the basis of semantic information and, when available, on the basis of form-specific information. The implementation of this hypothesis in models of lexical access is discussed in detail.
  • Alibali, M. W., Flevares, L. M., & Goldin-Meadow, S. (1997). Assessing knowledge conveyed in gesture: Do teachers have the upper hand? Journal of Educational Psychology, 89(1), 183-193. doi:10.1037/0022-0663.89.1.183.

    Abstract

    Children's gestures can reveal important information about their problem-solving strategies. This study investigated whether the information children express only in gesture is accessible to adults not trained in gesture coding. Twenty teachers and 20 undergraduates viewed videotaped vignettes of 12 children explaining their solutions to equations. Six children expressed the same strategy in speech and gesture, and 6 expressed different strategies. After each vignette, adults described the child's reasoning. For children who expressed different strategies in speech and gesture, both teachers and undergraduates frequently described strategies that children had not expressed in speech. These additional strategies could often be traced to the children's gestures. Sensitivity to gesture was comparable for teachers and undergraduates. Thus, even without training, adults glean information not only from children's words but also from their hands.
  • Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.

    Abstract

    At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to "package" spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
  • Allen, S. E. M. (1998). Categories within the verb category: Learning the causative in Inuktitut. Linguistics, 36(4), 633-677.
  • Ameka, F. K. (1999). [Review of M. E. Kropp Dakubu: Korle meets the sea: a sociolinguistic history of Accra]. Bulletin of the School of Oriental and African Studies, 62, 198-199. doi:10.1017/S0041977X0001836X.
  • Ameka, F. K. (1998). Particules énonciatives en Ewe. Faits de langues, 6(11/12), 179-204.

    Abstract

    Particles are little words that speakers use to signal the illocutionary force of utterances and/or express their attitude towards elements of the communicative situation, e.g., the addressees. This paper presents an overview of the classification, meaning and use of utterance particles in Ewe. It argues that they constitute a grammatical word class on functional and distributional grounds. The paper calls for a cross-cultural investigation of particles, especially in Africa, where they have been neglected for far too long.
  • Ameka, F. K. (1999). Partir c'est mourir un peu: Universal and culture specific features of leave taking. RASK International Journal of Language and Communication, 9/10, 257-283.
  • Ameka, F. K. (1999). Spatial information packaging in Ewe and Likpe: A comparative perspective. Frankfurter Afrikanistische Blätter, 11, 7-34.
  • Ameka, F. K. (1995). The linguistic construction of space in Ewe. Cognitive Linguistics, 6(2/3), 139-182. doi:10.1515/cogl.1995.6.2-3.139.

    Abstract

    This paper presents the linguistic means of describing spatial relations in Ewe with particular emphasis on the grammar and meaning of adpositions. Ewe (Niger-Congo) has two sets of adpositions: prepositions, which have evolved from verbs, and postpositions, which have evolved from nouns. The postpositions create places and are treated as intrinsic parts or regions of the reference object in a spatial description. The prepositions provide the general orientation of a Figure (located object). It is demonstrated that spatial relations, such as those encapsulated in "the basic topological prepositions at, in and on" in English (Herskovits 1986: 9), are not encoded in single linguistic elements in Ewe, but are distributed over members of different form classes in a syntagmatic string. The paper explores the role of compositionality and its interaction with pragmatics to yield understandings of spatial configurations in such a language, where spatial meanings cannot be simply read off one form. The study also examines the diversity among languages in terms of the nature and obligatoriness of the coding of relational and ground information in spatial constructions. It is argued that the range and type of distinctions discussed in the paper must be accounted for in semantic typology and in the cross-linguistic investigation of spatial language and conceptualisation.
  • Ameka, F. K. (1999). The typology and semantics of complex nominal duplication in Ewe. Anthropological Linguistics, 41, 75-106.
  • Baayen, R. H., Dijkstra, T., & Schreuder, R. (1997). Singulars and Plurals in Dutch: Evidence for a Parallel Dual-Route Model. Journal of Memory and Language, 37(1), 94-117. doi:10.1006/jmla.1997.2509.

    Abstract

    Are regular morphologically complex words stored in the mental lexicon? Answers to this question have ranged from full listing to parsing for every regular complex word. We investigated the roles of storage and parsing in the visual domain for the productive Dutch plural suffix -en. Two experiments are reported that show that storage occurs for high-frequency noun plurals. A mathematical formalization of a parallel dual-route race model is presented that accounts for the patterns in the observed reaction time data with essentially one free parameter, the speed of the parsing route. Parsing for noun plurals appears to be a time-costly process, which we attribute to the ambiguity of -en, a suffix that is predominantly used as a verbal ending. A third experiment contrasted nouns and verbs. This experiment revealed no effect of surface frequency for verbs, but again a solid effect for nouns. Together, our results suggest that many noun plurals are stored in order to avoid the time-costly resolution of the subcategorization conflict that arises when the -en suffix is attached to nouns.

  • Baayen, R. H. (1997). The pragmatics of the 'tenses' in biblical Hebrew. Studies in Language, 21(2), 245-285. doi:10.1075/sl.21.2.02baa.

    Abstract

    In this paper, I present an analysis of the so-called tense forms of Biblical Hebrew. While there is fairly broad consensus on the interpretation of the yiqtol tense form, the interpretation of the qātal tense form has led to considerable controversy. I will argue that the qātal form has no intrinsic semantic value and that it serves a pragmatic function only, namely, signaling to the hearer that the event or state expressed by the verb cannot be tightly integrated into the discourse representation of the hearer, given the speaker's estimate of their common ground.
  • Baayen, R. H., Lieber, R., & Schreuder, R. (1997). The morphological complexity of simplex nouns. Linguistics, 35, 861-877. doi:10.1515/ling.1997.35.5.861.
  • Baayen, R. H., & Lieber, R. (1997). Word frequency distributions and lexical semantics. Computers and the Humanities, 30, 281-291.

    Abstract

    This paper addresses the relation between meaning, lexical productivity, and frequency of use. Using density estimation as a visualization tool, we show that differences in semantic structure can be reflected in probability density functions estimated for word frequency distributions. We call attention to an example of a bimodal density, and suggest that bimodality arises when distributions of well-entrenched lexical items, which appear to be lognormal, are mixed with distributions of productively created nonce formations.
  • Bailey, A., Hervas, A., Matthews, N., Palferman, S., Wallace, S., Aubin, A., Michelotti, J., Wainhouse, C., Papanikolaou, K., Rutter, M., Maestrini, E., Marlow, A., Weeks, D. E., Lamb, J., Francks, C., Kearsley, G., Scudder, P., Monaco, A. P., Baird, G., Cox, A., Cockerill, H., Nuffield, F., Le Couteur, A., Berney, T., Cooper, H., Kelly, T., Green, J., Whittaker, J., Gilchrist, A., Bolton, P., Schönewald, A., Daker, M., Ogilvie, C., Docherty, Z., Deans, Z., Bolton, B., Packer, R., Poustka, F., Rühl, D., Schmötzer, G., Bölte, S., Klauck, S. M., Spieler, A., Poustka, A., Van Engeland, H., Kemner, C., De Jonge, M., Den Hartog, I., Lord, C., Cook, E., Leventhal, B., Volkmar, F., Pauls, D., Klin, A., Smalley, S., Fombonne, E., Rogé, B., Tauber, M., Arti-Vartayan, E., Fremolle-Kruck, J., Pederson, L., Haracopos, D., Brondum-Nielsen, K., & Cotterill, R. (1998). A full genome screen for autism with evidence for linkage to a region on chromosome 7q. International Molecular Genetic Study of Autism Consortium. Human Molecular Genetics, 7(3), 571-578. doi:10.1093/hmg/7.3.571.

    Abstract

    Autism is characterized by impairments in reciprocal social interaction and communication, and restricted and stereotyped patterns of interests and activities. Developmental difficulties are apparent before 3 years of age and there is evidence for strong genetic influences most likely involving more than one susceptibility gene. A two-stage genome search for susceptibility loci in autism was performed on 87 affected sib pairs plus 12 non-sib affected relative-pairs, from a total of 99 families identified by an international consortium. Regions on six chromosomes (4, 7, 10, 16, 19 and 22) were identified which generated a multipoint maximum lod score (MLS) > 1. A region on chromosome 7q was the most significant with an MLS of 3.55 near markers D7S530 and D7S684 in the subset of 56 UK affected sib-pair families, and an MLS of 2.53 in all 87 affected sib-pair families. An area on chromosome 16p near the telomere was the next most significant, with an MLS of 1.97 in the UK families, and 1.51 in all families. These results are an important step towards identifying genes predisposing to autism; establishing their general applicability requires further study.
  • Bastiaansen, M. C. M., & Hagoort, P. (2003). Event-induced theta responses as a window on the dynamics of memory. Cortex, 39(4-5), 967-972. doi:10.1016/S0010-9452(08)70873-6.

    Abstract

    An important, but often ignored distinction in the analysis of EEG signals is that between evoked activity and induced activity. Whereas evoked activity reflects the summation of transient post-synaptic potentials triggered by an event, induced activity, which is mainly oscillatory in nature, is thought to reflect changes in parameters controlling dynamic interactions within and between brain structures. We hypothesize that induced activity may yield information about the dynamics of cell assembly formation, activation and subsequent uncoupling, which may play a prominent role in different types of memory operations. We then describe a number of analysis tools that can be used to study the reactivity of induced rhythmic activity, both in terms of amplitude changes and of phase variability.

    We briefly discuss how alpha, gamma and theta rhythms are thought to be generated, paying special attention to the hypothesis that the theta rhythm reflects dynamic interactions between the hippocampal system and the neocortex. This hypothesis would imply that studying the reactivity of scalp-recorded theta may provide a window on the contribution of the hippocampus to memory functions.

    We review studies investigating the reactivity of scalp-recorded theta in paradigms engaging episodic memory, spatial memory and working memory. In addition, we review studies that relate theta reactivity to processes at the interface of memory and language. Despite many unknowns, the experimental evidence largely supports the hypothesis that theta activity plays a functional role in cell assembly formation, a process which may constitute the neural basis of memory formation and retrieval. The available data provide only highly indirect support for the hypothesis that scalp-recorded theta yields information about hippocampal functioning. It is concluded that studying induced rhythmic activity holds promise as an additional important way to study brain function.
  • Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.

    Abstract

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator. Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset. Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps. Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD.
  • Bastiaansen, M. C. M., Böcker, K. B. E., Cluitmans, P. J. M., & Brunia, C. H. M. (1999). Event-related desynchronization related to the anticipation of a stimulus providing knowledge of results. Clinical Neurophysiology, 110, 250-260.

    Abstract

    In the present paper, event-related desynchronization (ERD) in the alpha and beta frequency bands is quantified in order to investigate the processes related to the anticipation of a knowledge of results (KR) stimulus. In a time estimation task, 10 subjects were instructed to press a button 4 s after the presentation of an auditory stimulus. Two seconds after the response they received auditory or visual feedback on the timing of their response. Preceding the button press, a centrally maximal ERD is found. Preceding the visual KR stimulus, an ERD is present that has an occipital maximum. Contrary to expectation, preceding the auditory KR stimulus there are no signs of a modality-specific ERD. Results are related to a thalamo-cortical gating model which predicts a correspondence between negative slow potentials and ERD during motor preparation and stimulus anticipation.
  • Bauer, B. L. M. (1998). Impersonal verbs in Italic. Their development from an Indo-European perspective. Journal of Indo-European Studies, 26, 91-120.
  • Bauer, B. L. M. (1998). Language loss in Gaul: Socio-historical and linguistic factors in language conflict. Southwest Journal of Linguistics, 15, 23-44.
  • Bauer, B. L. M. (1997). Response to David Lightfoot’s Review of The Emergence and Development of SVO Patterning in Latin and French: Diachronic and Psycholinguistic Perspectives. Language, 73(2), 352-358.
  • Beattie, G. W., Cutler, A., & Pearson, M. (1982). Why is Mrs Thatcher interrupted so often? [Letters to Nature]. Nature, 300, 744-747. doi:10.1038/300744a0.

    Abstract

    If a conversation is to proceed smoothly, the participants have to take turns to speak. Studies of conversation have shown that there are signals which speakers give to inform listeners that they are willing to hand over the conversational turn [1-4]. Some of these signals are part of the text (for example, completion of syntactic segments), some are non-verbal (such as completion of a gesture), but most are carried by the pitch, timing and intensity pattern of the speech; for example, both pitch and loudness tend to drop particularly low at the end of a speaker's turn. When one speaker interrupts another, the two can be said to be disputing who has the turn. Interruptions can occur because one participant tries to dominate or disrupt the conversation. But it could also be the case that mistakes occur in the way these subtle turn-yielding signals are transmitted and received. We demonstrate here that many interruptions in an interview with Mrs Margaret Thatcher, the British Prime Minister, occur at points where independent judges agree that her turn appears to have finished. It is suggested that she is unconsciously displaying turn-yielding cues at certain inappropriate points. The turn-yielding cues responsible are identified.
  • Bierwisch, M. (1997). Universal Grammar and the Basic Variety. Second Language Research, 13(4), 348-366. doi:10.1177/026765839701300403.

    Abstract

    The Basic Variety (BV) as conceived by Klein and Perdue (K&P) is a relatively stable state in the process of spontaneous (adult) second language acquisition, characterized by a small set of phrasal, semantic and pragmatic principles. These principles are derived by inductive generalization from a fairly large body of data. They are considered by K&P as roughly equivalent to those of Universal Grammar (UG) in the sense of Chomsky's Minimalist Program, with the proviso that the BV allows for only weak (or unmarked) formal features. The present article first discusses the viability of the BV principles proposed by K&P, arguing that some of them are in need of clarification with learner varieties, and that they are, in any case, not likely to be part of UG, as they exclude phenomena (e.g., so-called psych verbs) that cannot be ruled out even from the core of natural language. The article also considers the proposal that learner varieties of the BV type are completely unmarked instantiations of UG. Putting aside problems arising from the Minimalist Program, especially the question whether a grammar with only weak features would be a factual possibility and what it would look like, it is argued that the BV as characterized by K&P must be considered as the result of a process that crucially differs from first language acquisition as furnished by UG for a number of reasons, including properties of the BV itself. As a matter of fact, several of the properties claimed for the BV by K&P are more likely the result of general learning strategies than of language-specific principles. If this is correct, the characterization of the BV is a fairly interesting result, albeit of a rather different type than K&P suggest.
  • Blair, H. J., Ho, M., Monaco, A. P., Fisher, S. E., Craig, I. W., & Boyd, Y. (1995). High-resolution comparative mapping of the proximal region of the mouse X chromosome. Genomics, 28(2), 305-310. doi:10.1006/geno.1995.1146.

    Abstract

    The murine homologues of the loci for McLeod syndrome (XK), Dent's disease (CLCN5), and synaptophysin (SYP) have been mapped to the proximal region of the mouse X chromosome and positioned with respect to other conserved loci in this region using a total of 948 progeny from two separate Mus musculus x Mus spretus backcrosses. In the mouse, the order of loci and evolutionary breakpoints (EB) has been established as centromere-(DXWas70, DXHXF34h)-EB-Clcn5-(Syp, DXMit55, DXMit26)-Tfe3-Gata1-EB-Xk-Cybb-telomere. In the proximal region of the human X chromosome short arm, the position of evolutionary breakpoints with respect to key loci has been established as DMD-EB-XK-PFC-EB-GATA1-CLCN5-EB-DXS1272E-ALAS2-EB-DXF34-centromere. These data have enabled us to construct a high-resolution genetic map for the approximately 3-cM interval between DXWas70 and Cybb on the mouse X chromosome, which encompasses 10 loci. This detailed map demonstrates the power of high-resolution genetic mapping in the mouse as a means of determining locus order in a small chromosomal region and of providing an accurate framework for the construction of physical maps.
  • Bock, K., Irwin, D. E., Davidson, D. J., & Levelt, W. J. M. (2003). Minding the clock. Journal of Memory and Language, 48, 653-685. doi:10.1016/S0749-596X(03)00007-X.

    Abstract

    Telling time is an exercise in coordinating language production with visual perception. By coupling different ways of saying times with different ways of seeing them, the performance of time-telling can be used to track cognitive transformations from visual to verbal information in connected speech. To accomplish this, we used eyetracking measures along with measures of speech timing during the production of time expressions. Our findings suggest that an effective interface between what has been seen and what is to be said can be constructed within 300 ms. This interface underpins a preverbal plan or message that appears to guide a comparatively slow, strongly incremental formulation of phrases. The results begin to trace the divide between seeing and saying (or thinking and speaking) that must be bridged during the creation of even the most prosaic utterances of a language.
  • Böcker, K. B. E., Bastiaansen, M. C. M., Vroomen, J., Brunia, C. H. M., & de Gelder, B. (1999). An ERP correlate of metrical stress in spoken word recognition. Psychophysiology, 36, 706-720. doi:10.1111/1469-8986.3660706.

    Abstract

    Rhythmic properties of spoken language such as metrical stress, that is, the alternation of strong and weak syllables, are important in speech recognition of stress-timed languages such as Dutch and English. Nineteen subjects listened passively to or discriminated actively between sequences of bisyllabic Dutch words, which started with either a weak or a strong syllable. Weak-initial words, which constitute 12% of the Dutch lexicon, evoked more negativity than strong-initial words in the interval between the P2 and N400 components of the auditory event-related potential. This negativity was denoted as N325. The N325 was larger during stress discrimination than during passive listening. The N325 was also larger when a weak-initial word followed a sequence of strong-initial words than when it followed words with the same stress pattern. The latter difference was larger for listeners who performed well on stress discrimination. It was concluded that the N325 is probably a manifestation of the extraction of metrical stress from the acoustic signal and its transformation into task requirements.
  • Bohnemeyer, J. (2003). Invisible time lines in the fabric of events: Temporal coherence in Yukatek narratives. Journal of Linguistic Anthropology, 13(2), 139-162. doi:10.1525/jlin.2003.13.2.139.

    Abstract

    This article examines how narratives are structured in a language in which event order is largely not coded. Yucatec Maya lacks both tense inflections and temporal connectives corresponding to English after and before. It is shown that the coding of events in Yucatec narratives is subject to a strict iconicity constraint within paragraph boundaries. Aspectual viewpoint shifting is used to reconcile iconicity preservation with the requirements of a more flexible narrative structure.
  • Bohnemeyer, J. (2000). Event order in language and cognition. Linguistics in the Netherlands, 17(1), 1-16. doi:10.1075/avt.17.04boh.
  • Boland, J. E., & Cutler, A. (1995). Interaction with autonomy: Defining multiple output models in psycholinguistic theory. Working Papers in Linguistics, 45, 1-10. Retrieved from http://hdl.handle.net/2066/15768.

    Abstract

    There are currently a number of psycholinguistic models in which processing at a particular level of representation is characterized by the generation of multiple outputs, with resolution involving the use of information from higher levels of processing. Surprisingly, models with this architecture have been characterized as autonomous within the domain of word recognition and as interactive within the domain of sentence processing. We suggest that the apparent internal confusion is not, as might be assumed, due to fundamental differences between lexical and syntactic processing. Rather, we believe that the labels in each domain were chosen in order to obtain maximal contrast between a new model and the model or models that were currently dominating the field.
  • Boland, J. E., & Cutler, A. (1995). Interaction with autonomy: Multiple Output models and the inadequacy of the Great Divide. Cognition, 58, 309-320. doi:10.1016/0010-0277(95)00684-2.

    Abstract

    There are currently a number of psycholinguistic models in which processing at a particular level of representation is characterized by the generation of multiple outputs, with resolution - but not generation - involving the use of information from higher levels of processing. Surprisingly, models with this architecture have been characterized as autonomous within the domain of word recognition but as interactive within the domain of sentence processing. We suggest that the apparent confusion is not, as might be assumed, due to fundamental differences between lexical and syntactic processing. Rather, we believe that the labels in each domain were chosen in order to obtain maximal contrast between a new model and the model or models that were currently dominating the field. The contradiction serves to highlight the inadequacy of a simple autonomy/interaction dichotomy for characterizing the architectures of current processing models.
  • Böttner, M. (1998). A collective extension of relational grammar. Logic Journal of the IGPL, 6(2), 175-193. doi:10.1093/jigpal/6.2.175.

    Abstract

    Relational grammar was proposed in Suppes (1976) as a semantical grammar for natural language. Fragments considered so far are restricted to distributive notions. In this article, relational grammar is extended to collective notions.
  • Bowerman, M. (1971). [Review of A. Bar Adon & W.F. Leopold (Eds.), Child language: A book of readings (Prentice Hall, 1971)]. Contemporary Psychology: APA Review of Books, 16, 808-809.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Li, P., & Bowerman, M. (1998). The acquisition of lexical and grammatical aspect in Chinese. First Language, 18, 311-350. doi:10.1177/014272379801805404.

    Abstract

    This study reports three experiments on how children learning Mandarin Chinese comprehend and use aspect markers. These experiments examine the role of lexical aspect in children's acquisition of grammatical aspect. Results provide converging evidence for children's early sensitivity to (1) the association between atelic verbs and the imperfective aspect markers zai, -zhe, and -ne, and (2) the association between telic verbs and the perfective aspect marker -le. Children did not show a sensitivity in their use or understanding of aspect markers to the difference between stative and activity verbs or between semelfactive and activity verbs. These results are consistent with Slobin's (1985) basic child grammar hypothesis that the contrast between process and result is important in children's early acquisition of temporal morphology. In contrast, they are inconsistent with Bickerton's (1981, 1984) language bioprogram hypothesis that the distinctions between state and process and between punctual and nonpunctual are preprogrammed into language learners. We suggest new ways of looking at the results in the light of recent probabilistic hypotheses that emphasize the role of input, prototypes and connectionist representations.
  • Brown, P. (1998). Children's first verbs in Tzeltal: Evidence for an early verb category. Linguistics, 36(4), 713-753.

    Abstract

    A major finding in studies of early vocabulary acquisition has been that children tend to learn a lot of nouns early but make do with relatively few verbs, among which semantically general-purpose verbs like do, make, get, have, give, come, go, and be play a prominent role. The preponderance of nouns is explained in terms of nouns labelling concrete objects being “easier” to learn than verbs, which label relational categories. Nouns label “natural categories” observable in the world, verbs label more linguistically and culturally specific categories of events linking objects belonging to such natural categories (Gentner 1978, 1982; Clark 1993). This view has been challenged recently by data from children learning certain non-Indo-European languages like Korean, where children have an early verb explosion and verbs dominate in early child utterances. Children learning the Mayan language Tzeltal also acquire verbs early, prior to any noun explosion as measured by production. Verb types are roughly equivalent to noun types in children’s beginning production vocabulary and soon outnumber them. At the one-word stage children’s verbs mostly have the form of a root stripped of affixes, correctly segmented despite structural difficulties. Quite early (before the MLU 2.0 point) there is evidence of productivity of some grammatical markers (although they are not always present): the person-marking affixes cross-referencing core arguments, and the completive/incompletive aspectual distinctions. The Tzeltal facts argue against a natural-categories explanation for children’s early vocabulary, in favor of a view emphasizing the early effects of language-specific properties of the input. They suggest that when and how a child acquires a “verb” category is centrally influenced by the structural properties of the input, and that the semantic structure of the language (where the referential load is concentrated) plays a fundamental role in addition to distributional facts.
  • Brown, P. (1998). Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech. Journal of Linguistic Anthropology, 8(2), 197-221. doi:10.1525/jlin.1998.8.2.197.

    Abstract

    When Tzeltal children in the Mayan community of Tenejapa, in southern Mexico, begin speaking, their production vocabulary consists predominantly of verb roots, in contrast to the dominance of nouns in the initial vocabulary of first‐language learners of Indo‐European languages. This article proposes that a particular Tzeltal conversational feature—known in the Mayanist literature as "dialogic repetition"—provides a context that facilitates the early analysis and use of verbs. Although Tzeltal babies are not treated by adults as genuine interlocutors worthy of sustained interaction, dialogic repetition in the speech the children are exposed to may have an important role in revealing to them the structural properties of the language, as well as in socializing the collaborative style of verbal interaction adults favor in this community.
  • Brown, C. M., Hagoort, P., & Ter Keurs, M. (1999). Electrophysiological signatures of visual lexical processing: Open- and closed-class words. Journal of Cognitive Neuroscience, 11(3), 261-281.

    Abstract

    This paper presents evidence on the disputed existence of an electrophysiological marker for the lexical-categorical distinction between open- and closed-class words. Event-related brain potentials were recorded from the scalp while subjects read a story. Separate waveforms were computed for open- and closed-class words. Two aspects of the waveforms could be reliably related to vocabulary class. The first was an early negativity in the 230- to 350-msec epoch, with a bilateral anterior predominance. This negativity was elicited by open- and closed-class words alike, was not affected by word frequency or word length, and had an earlier peak latency for closed-class words. The second was a frontal slow negative shift in the 350- to 500-msec epoch, largest over the left side of the scalp. This late negativity was only elicited by closed-class words. Although the early negativity cannot serve as a qualitative marker of the open- and closed-class distinction, it does reflect the earliest electrophysiological manifestation of the availability of categorical information from the mental lexicon. These results suggest that the brain honors the distinction between open- and closed-class words, in relation to the different roles that they play in on-line sentence processing.
  • Brown, C. M., Van Berkum, J. J. A., & Hagoort, P. (2000). Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding. Journal of Psycholinguistic Research, 29(1), 53-68. doi:10.1023/A:1005172406969.

    Abstract

    A study is presented on the effects of discourse–semantic and lexical–syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior discourse–semantic information biased toward one analysis of the temporary ambiguity, whereas the lexical-syntactic information allowed only for the alternative analysis. The ERP results show that discourse–semantic information can momentarily take precedence over syntactic information, even if this violates grammatical gender agreement rules.
  • Brown, C. M., Hagoort, P., & Chwilla, D. J. (2000). An event-related brain potential analysis of visual word priming effects. Brain and Language, 72, 158-190. doi:10.1006/brln.1999.2284.

    Abstract

    Two experiments are reported that provide evidence on task-induced effects during visual lexical processing in a prime-target semantic priming paradigm. The research focuses on target expectancy effects by manipulating the proportion of semantically related and unrelated word pairs. In Experiment 1, a lexical decision task was used and reaction times (RTs) and event-related brain potentials (ERPs) were obtained. In Experiment 2, subjects silently read the stimuli, without any additional task demands, and ERPs were recorded. The RT and ERP results of Experiment 1 demonstrate that an expectancy mechanism contributed to the priming effect when a high proportion of related word pairs was presented. The ERP results of Experiment 2 show that in the absence of extraneous task requirements, an expectancy mechanism is not active. However, a standard ERP semantic priming effect was obtained in Experiment 2. The combined results show that priming effects due to relatedness proportion are induced by task demands and are not a standard aspect of online lexical processing.
  • Brown, P. (1999). Anthropologie cognitive. Anthropologie et Sociétés, 23(3), 91-119.

    Abstract

    In reaction to the dominance of universalism in the 1970s and '80s, there have recently been a number of reappraisals of the relation between language and cognition, and the field of cognitive anthropology is flourishing in several new directions in both America and Europe. This is partly due to a renewal and re-evaluation of approaches to the question of linguistic relativity associated with Whorf, and partly to the inspiration of modern developments in cognitive science. This review briefly sketches the history of cognitive anthropology and surveys current research on both sides of the Atlantic. The focus is on assessing current directions, considering in particular, by way of illustration, recent work in cultural models and on spatial language and cognition. The review concludes with an assessment of how cognitive anthropology could contribute directly both to the broader project of cognitive science and to the anthropological study of how cultural ideas and practices relate to structures and processes of human cognition.
  • Brown, P. (1998). [Review of the book by A.J. Wootton, Interaction and the development of mind]. Journal of the Royal Anthropological Institute, 4(4), 816-817.
  • Brown, P. (1998). La identificación de las raíces verbales en Tzeltal (Maya): Cómo lo hacen los niños? Función, 17-18, 121-146.

    Abstract

    This is a Spanish translation of Brown 1997.
  • Brown, P. (1999). Repetition [Encyclopedia entry for 'Lexicon for the New Millenium', ed. Alessandro Duranti]. Journal of Linguistic Anthropology, 9(2), 223-226. doi:10.1525/jlin.1999.9.1-2.223.

    Abstract

    This is an encyclopedia entry describing conversational and interactional uses of linguistic repetition.
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Carlsson, K., Petrovic, P., Skare, S., Petersson, K. M., & Ingvar, M. (2000). Tickling expectations: Neural processing in anticipation of a sensory stimulus. Journal of Cognitive Neuroscience, 12(4), 691-703. doi:10.1162/089892900562318.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Choi, S., McDonough, L., Bowerman, M., & Mandler, J. M. (1999). Early sensitivity to language-specific spatial categories in English and Korean. Cognitive Development, 14, 241-268. doi:10.1016/S0885-2014(99)00004-0.

    Abstract

    This study investigates young children’s comprehension of spatial terms in two languages that categorize space strikingly differently. English makes a distinction between actions resulting in containment (put in) versus support or surface attachment (put on), while Korean makes a cross-cutting distinction between tight-fit relations (kkita) versus loose-fit or other contact relations (various verbs). In particular, the Korean verb kkita refers to actions resulting in a tight-fit relation regardless of containment or support. In a preferential looking study we assessed the comprehension of in by 20 English learners and kkita by 10 Korean learners, all between 18 and 23 months. The children viewed pairs of scenes while listening to sentences with and without the target word. The target word led children to gaze at different and language-appropriate aspects of the scenes. We conclude that children are sensitive to language-specific spatial categories by 18–23 months.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Chwilla, D., Brown, C. M., & Hagoort, P. (1995). The N400 as a function of the level of processing. Psychophysiology, 32, 274-285. doi:10.1111/j.1469-8986.1995.tb02956.x.

    Abstract

    In a semantic priming paradigm, the effects of different levels of processing on the N400 were assessed by changing the task demands. In the lexical decision task, subjects had to discriminate between words and nonwords and in the physical task, subjects had to discriminate between uppercase and lowercase letters. The proportion of related versus unrelated word pairs differed between conditions. A lexicality test on reaction times demonstrated that the physical task was performed nonlexically. Moreover, a semantic priming reaction time effect was obtained only in the lexical decision task. The level of processing clearly affected the event-related potentials. An N400 priming effect was only observed in the lexical decision task. In contrast, in the physical task a P300 effect was observed for either related or unrelated targets, depending on their frequency of occurrence. Taken together, the results indicate that an N400 priming effect is only evoked when the task performance induces the semantic aspects of words to become part of an episodic trace of the stimulus event.
  • Clifton, Jr., C., Cutler, A., McQueen, J. M., & Van Ooijen, B. (1999). The processing of inflected forms. [Commentary on H. Clahsen: Lexical entries and rules of language.]. Behavioral and Brain Sciences, 22, 1018-1019.

    Abstract

    Clahsen proposes two distinct processing routes, for regularly and irregularly inflected forms, respectively, and thus is apparently making a psychological claim. We argue that his position, which embodies a strictly linguistic perspective, does not constitute a psychological processing model.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties. Gramma/TTT, 9, 141-156.
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of the Phonetic Society of Japan, 1, 4-13.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous: there are no reliable cues that tell the listener where one word ends and the next begins. For adult listeners, segmenting spoken language into individual words is therefore not without problems, but for a child who does not yet have a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of their second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life, and especially during its second half, speech perception develops from a general phonetic discrimination ability into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that, long before they can say even a single word, children are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words that were first presented in isolation within a continuous speech context. The everyday language input to a child of this age does not, in certain respects, make this easy, for example because most words do not occur in isolation. Yet the child is also offered some footholds, among other things because word use is restricted.
  • Cutler, A. (1971). [Review of the book Probleme der Aufgabenanalyse bei der Erstellung von Sprachprogrammen by K. Bung]. Babel, 7, 29-31.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When the semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A., & Norris, D. (1999). Sharpening Ockham’s razor (Commentary on W.J.M. Levelt, A. Roelofs & A.S. Meyer: A theory of lexical access in speech production). Behavioral and Brain Sciences, 22, 40-41.

    Abstract

    Language production and comprehension are intimately interrelated; and models of production and comprehension should, we argue, be constrained by common architectural guidelines. Levelt et al.'s target article adopts as guiding principle Ockham's razor: the best model of production is the simplest one. We recommend adoption of the same principle in comprehension, with consequent simplification of some well-known types of models.
  • Cutler, A., & Otake, T. (1999). Pitch accent in spoken-word recognition in Japanese. Journal of the Acoustical Society of America, 105, 1877-1888.

    Abstract

    Three experiments addressed the question of whether pitch-accent information may be exploited in the process of recognizing spoken words in Tokyo Japanese. In a two-choice classification task, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted (e.g., ka from baka HL or gaka LH); most judgments were correct, and listeners’ decisions were correlated with the fundamental frequency characteristics of the syllables. In a gating experiment, listeners heard initial fragments of words and guessed what the words were; their guesses overwhelmingly had the same initial accent structure as the gated word even when only the beginning CV of the stimulus (e.g., na- from nagasa HLL or nagashi LHH) was presented. In addition, listeners were more confident in guesses with the same initial accent structure as the stimulus than in guesses with different accent. In a lexical decision experiment, responses to spoken words (e.g., ame HL) were speeded by previous presentation of the same word (e.g., ame HL) but not by previous presentation of a word differing only in accent (e.g., ame LH). Together these findings provide strong evidence that accentual information constrains the activation and selection of candidates for spoken-word recognition.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A. (1997). The syllable’s role in the segmentation of stress languages. Language and Cognitive Processes, 12, 839-845. doi:10.1080/016909697386718.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Damian, M. F., & Abdel Rahman, R. (2003). Semantic priming in the naming of objects and famous faces. British Journal of Psychology, 94(4), 517-527.

    Abstract

    Researchers interested in face processing have recently debated whether access to the name of a known person occurs in parallel with retrieval of semantic-biographical codes, rather than in a sequential fashion. Recently, Schweinberger, Burton, and Kelly (2001) took a failure to obtain a semantic context effect in a manual syllable judgment task on names of famous faces as support for this position. In two experiments, we compared the effects of visually presented categorically related prime words with either objects (e.g. prime: animal; target: dog) or faces of celebrities (e.g. prime: actor; target: Bruce Willis) as targets. Targets were either manually categorized with regard to the number of syllables (as in Schweinberger et al.), or they were overtly named. For neither objects nor faces was semantic priming obtained in syllable decisions; crucially, however, priming was obtained when objects and faces were overtly named. These results suggest that both face and object naming are susceptible to semantic context effects.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [n] is mispronounced as [ŋ], the [ŋ] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Drozd, K. F. (1995). Child English pre-sentential negation as metalinguistic exclamatory sentence negation. Journal of Child Language, 22(3), 583-610. doi:10.1017/S030500090000996X.

    Abstract

    This paper presents a study of the spontaneous pre-sentential negations of ten English-speaking children between the ages of 1;6 and 3;4 which supports the hypothesis that child English nonanaphoric pre-sentential negation is a form of metalinguistic exclamatory sentence negation. A detailed discourse analysis reveals that children's pre-sentential negatives like No Nathaniel a king (i) are characteristically echoic, and (ii) typically express objection and rectification, two characteristic functions of exclamatory negation in adult discourse, e.g. Don't say 'Nathaniel's a king'! A comparison of children's pre-sentential negations with their internal predicate negations using not and don't reveals that the two negative constructions are formally and functionally distinct. I argue that children's nonanaphoric pre-sentential negatives constitute an independent, well-formed class of discourse negation. They are not 'primitive' constructions derived from the miscategorization of emphatic no in adult speech or children's 'inventions'. Nor are they an early derivational variant of internal sentence negation. Rather, these negatives reflect young children's competence in using grammatical negative constructions appropriately in discourse.
  • Dunn, M. (2003). Pioneers of Island Melanesia project. Oceania Newsletter, 30/31, 1-3.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardisation and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic as it is based on features of Russian phonology, rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
  • Enfield, N. J. (2003). Producing and editing diagrams using co-speech gesture: Spatializing non-spatial relations in explanations of kinship in Laos. Journal of Linguistic Anthropology, 13(1), 7-50. doi:10.1525/jlin.2003.13.1.7.

    Abstract

    This article presents a description of two sequences of talk by urban speakers of Lao (a southwestern Tai language spoken in Laos) in which co-speech gesture plays a central role in explanations of kinship relations and terminology. The speakers spontaneously use hand gestures and gaze to spatially diagram relationships that have no inherent spatial structure. The descriptive sections of the article are prefaced by a discussion of the semiotic complexity of illustrative gestures and gesture diagrams. Gestured signals feature iconic, indexical, and symbolic components, usually in combination, as well as using motion and three-dimensional space to convey meaning. Such diagrams show temporal persistence and structural integrity despite having been projected in midair by evanescent signals (i.e., hand movements and directed gaze). Speakers sometimes need or want to revise these spatial representations without destroying their structural integrity. The need to "edit" gesture diagrams involves such techniques as hold-and-drag, hold-and-work-with-free-hand, reassignment-of-old-chunk-to-new-chunk, and move-body-into-new-space.
  • Enfield, N. J. (2003). The definition of WHAT-d'you-call-it: Semantics and pragmatics of 'recognitional deixis'. Journal of Pragmatics, 35(1), 101-117. doi:10.1016/S0378-2166(02)00066-8.

    Abstract

    Words such as what-d'you-call-it raise issues at the heart of the semantics/pragmatics interface. Expressions of this kind are conventionalised and have meanings which, while very general, are explicitly oriented to the interactional nature of the speech context, drawing attention to a speaker's assumption that the listener can figure out what the speaker is referring to. The details of such meanings can account for functional contrast among similar expressions, in a single language as well as cross-linguistically. The English expressions what-d'you-call-it and you-know-what are compared, along with a comparable Lao expression meaning, roughly, ‘that thing’. Proposed definitions of the meanings of these expressions account for their different patterns of use. These definitions include reference to the speech act participants, a point which supports the view that what-d'you-call-it words can be considered deictic. Issues arising from the descriptive section of this paper include the question of how such terms are derived, as well as their degree of conventionality.
  • Enfield, N. J. (2003). Demonstratives in space and interaction: Data from Lao speakers and implications for semantic analysis. Language, 79(1), 82-117.

    Abstract

    The semantics of simple (i.e. two-term) systems of demonstratives have in general hitherto been treated as inherently spatial and as marking a symmetrical opposition of distance (‘proximal’ versus ‘distal’), assuming the speaker as a point of origin. More complex systems are known to add further distinctions, such as visibility or elevation, but are assumed to build on basic distinctions of distance. Despite their inherently context-dependent nature, little previous work has based the analysis of demonstratives on evidence of their use in real interactional situations. In this article, video recordings of spontaneous interaction among speakers of Lao (Southwestern Tai, Laos) are examined in an analysis of the two Lao demonstrative determiners nii4 and nan4. A hypothesis of minimal encoded semantics is tested against rich contextual information, and the hypothesis is shown to be consistent with the data. Encoded conventional meanings must be kept distinct from contingent contextual information and context-dependent pragmatic implicatures. Based on examples of the two Lao demonstrative determiners in exophoric uses, the following claims are made. The term nii4 is a semantically general demonstrative, lacking specification of ANY spatial property (such as location or distance). The term nan4 specifies that the referent is ‘not here’ (encoding ‘location’ but NOT ‘distance’). Anchoring the semantic specification in a deictic primitive ‘here’ allows a strictly discrete intensional distinction to be mapped onto an extensional range of endless elasticity. A common ‘proximal’ spatial interpretation for the semantically more general term nii4 arises from the paradigmatic opposition of the two demonstrative determiners. This kind of analysis suggests a reappraisal of our general understanding of the semantics of demonstrative systems universally. To investigate the question in sufficient detail, however, rich contextual data (preferably collected on video) is necessary.
  • Enfield, N. J. (1999). On the indispensability of semantics: Defining the ‘vacuous’. Rask: internationalt tidsskrift for sprog og kommunikation, 9/10, 285-304.
  • Enfield, N. J. (1997). Review of 'Give: a cognitive linguistic study', by John Newman. Australian Journal of Linguistics, 17(1), 89-92. doi:10.1080/07268609708599546.
  • Enfield, N. J. (1997). Review of 'Plastic glasses and church fathers: semantic extension from the ethnoscience tradition', by David Kronenfeld. Anthropological Linguistics, 39(3), 459-464. Retrieved from http://www.jstor.org/stable/30028999.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Ernestus, M., & Baayen, R. H. (2003). Predicting the unpredictable: The phonological interpretation of neutralized segments in Dutch. Language, 79(1), 5-38.

    Abstract

    Among the most fascinating data for phonology are those showing how speakers incorporate new words and foreign words into their language system, since these data provide cues to the actual principles underlying language. In this article, we address how speakers deal with neutralized obstruents in new words. We formulate four hypotheses and test them on the basis of Dutch word-final obstruents, which are neutral for [voice]. Our experiments show that speakers predict the characteristics of neutralized segments on the basis of phonologically similar morphemes stored in the mental lexicon. This effect of the similar morphemes can be modeled in several ways. We compare five models, among them STOCHASTIC OPTIMALITY THEORY and ANALOGICAL MODELING OF LANGUAGE; all perform approximately equally well, but they differ in their complexity, with analogical modeling of language providing the most economical explanation.
  • Fear, B. D., Cutler, A., & Butterfield, S. (1995). The strong/weak syllable distinction in English. Journal of the Acoustical Society of America, 97, 1893-1904. doi:10.1121/1.412063.

    Abstract

    Strong and weak syllables in English can be distinguished on the basis of vowel quality, of stress, or of both factors. Critical for deciding between these factors are syllables containing unstressed unreduced vowels, such as the first syllable of automata. In this study 12 speakers produced sentences containing matched sets of words with initial vowels ranging from stressed to reduced, at normal and at fast speech rates. Measurements of the duration, intensity, F0, and spectral characteristics of the word-initial vowels showed that unstressed unreduced vowels differed significantly from both stressed and reduced vowels. This result held true across speaker sex and dialect. The vowels produced by one speaker were then cross-spliced across the words within each set, and the resulting words' acceptability was rated by listeners. In general, cross-spliced words were only rated significantly less acceptable than unspliced words when reduced vowels interchanged with any other vowel. Correlations between rated acceptability and acoustic characteristics of the cross-spliced words demonstrated that listeners were attending to duration, intensity, and spectral characteristics. Together these results suggest that unstressed unreduced vowels in English pattern differently from both stressed and reduced vowels, so that no acoustic support for a binary categorical distinction exists; nevertheless, listeners make such a distinction, grouping unstressed unreduced vowels by preference with stressed vowels.
  • Felser, C., Roberts, L., Marinis, T., & Gross, R. (2003). The processing of ambiguous sentences by first and second language learners of English. Applied Psycholinguistics, 24(3), 453-489.

    Abstract

    This study investigates the way adult second language (L2) learners of English resolve relative clause attachment ambiguities in sentences such as The dean liked the secretary of the professor who was reading a letter. Two groups of advanced L2 learners of English with Greek or German as their first language participated in a set of off-line and on-line tasks. The results indicate that the L2 learners do not process ambiguous sentences of this type in the same way as adult native speakers of English do. Although the learners’ disambiguation preferences were influenced by lexical–semantic properties of the preposition linking the two potential antecedent noun phrases (of vs. with), there was no evidence that they applied any phrase structure–based ambiguity resolution strategies of the kind that have been claimed to influence sentence processing in monolingual adults. The L2 learners’ performance also differs markedly from the results obtained from 6- to 7-year-old monolingual English children in a parallel auditory study, in that the children’s attachment preferences were not affected by the type of preposition at all. We argue that children, monolingual adults, and adult L2 learners differ in the extent to which they are guided by phrase structure and lexical–semantic information during sentence processing.
  • Fisher, S. E., Stein, J. F., & Monaco, A. P. (1999). A genome-wide search strategy for identifying quantitative trait loci involved in reading and spelling disability (developmental dyslexia). European Child & Adolescent Psychiatry, 8(suppl. 3), S47-S51. doi:10.1007/PL00010694.

    Abstract

    Family and twin studies of developmental dyslexia have consistently shown that there is a significant heritable component for this disorder. However, any genetic basis for the trait is likely to be complex, involving reduced penetrance, phenocopy, heterogeneity and oligogenic inheritance. This complexity results in reduced power for traditional parametric linkage analysis, where specification of the correct genetic model is important. One strategy is to focus on large multigenerational pedigrees with severe phenotypes and/or apparent simple Mendelian inheritance, as has been successfully demonstrated for speech and language impairment. This approach is limited by the scarcity of such families. An alternative which has recently become feasible due to the development of high-throughput genotyping techniques is the analysis of large numbers of sib-pairs using allele-sharing methodology. This paper outlines our strategy for conducting a systematic genome-wide search for genes involved in dyslexia in a large number of affected sib-pair families from the UK. We use a series of psychometric tests to obtain different quantitative measures of reading deficit, which should correlate with different components of the dyslexia phenotype, such as phonological awareness and orthographic coding ability. This enables us to use QTL (quantitative trait locus) mapping as a powerful tool for localising genes which may contribute to reading and spelling disability.
  • Fisher, S. E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D. F., Richardson, A. J., Weeks, D. E., Stein, J. F., & Monaco, A. P. (1999). A quantitative-trait locus on chromosome 6p influences different aspects of developmental dyslexia. American Journal of Human Genetics, 64(1), 146-156. doi:10.1086/302190.

    Abstract

    Recent application of nonparametric-linkage analysis to reading disability has implicated a putative quantitative-trait locus (QTL) on the short arm of chromosome 6. In the present study, we use QTL methods to evaluate linkage to the 6p25-21.3 region in a sample of 181 sib pairs from 82 nuclear families that were selected on the basis of a dyslexic proband. We have assessed linkage directly for several quantitative measures that should correlate with different components of the phenotype, rather than using a single composite measure or employing categorical definitions of subtypes. Our measures include the traditional IQ/reading discrepancy score, as well as tests of word recognition, irregular-word reading, and nonword reading. Pointwise analysis by means of sib-pair trait differences suggests the presence, in 6p21.3, of a QTL influencing multiple components of dyslexia, in particular the reading of irregular words (P=.0016) and nonwords (P=.0024). A complementary statistical approach involving estimation of variance components supports these findings (irregular words, P=.007; nonwords, P=.0004). Multipoint analyses place the QTL within the D6S422-D6S291 interval, with a peak around markers D6S276 and D6S105 consistently identified by approaches based on trait differences (irregular words, P=.00035; nonwords, P=.0035) and variance components (irregular words, P=.007; nonwords, P=.0038). Our findings indicate that the QTL affects both phonological and orthographic skills and is not specific to phoneme awareness, as has been previously suggested. Further studies will be necessary to obtain a more precise localization of this QTL, which may lead to the isolation of one of the genes involved in developmental dyslexia.
  • Fisher, S. E., Hatchwell, E., Chand, A., Ockenden, N., Monaco, A. P., & Craig, I. W. (1995). Construction of two YAC contigs in human Xp11.23-p11.22, one encompassing the loci OATL1, GATA, TFE3, and SYP, the other linking DXS255 to DXS146. Genomics, 29(2), 496-502. doi:10.1006/geno.1995.9976.

    Abstract

    We have constructed two YAC contigs in the Xp11.23-p11.22 interval of the human X chromosome, a region that was previously poorly characterized. One contig, of at least 1.4 Mb, links the pseudogene OATL1 to the genes GATA1, TFE3, and SYP and also contains loci implicated in Wiskott-Aldrich syndrome and synovial sarcoma. A second contig, mapping proximal to the first, is estimated to be over 2.1 Mb and links the hypervariable locus DXS255 to DXS146, and also contains a chloride channel gene that is responsible for hereditary nephrolithiasis. We have used plasmid rescue, inverse PCR, and Alu-PCR to generate 20 novel markers from this region, 1 of which is polymorphic, and have positioned these relative to one another on the basis of YAC analysis. The order of previously known markers within our contigs, Xpter-OATL1-GATA-TFE3-SYP-DXS255-DXS146-Xcen, agrees with genomic pulsed-field maps of the region. In addition, we have constructed a rare-cutter restriction map for a 710-kb region of the DXS255-DXS146 contig and have identified three CpG islands. These contigs and new markers will provide a useful resource for more detailed analysis of Xp11.23-p11.22, a region implicated in several genetic diseases.
  • Fisher, S. E., Lai, C. S., & Monaco, A. P. (2003). Deciphering the genetic basis of speech and language disorders. Annual Review of Neuroscience, 26, 57-80. doi:10.1146/annurev.neuro.26.041002.131144.

    Abstract

    A significant number of individuals have unexplained difficulties with acquiring normal speech and language, despite adequate intelligence and environmental stimulation. Although developmental disorders of speech and language are heritable, the genetic basis is likely to involve several, possibly many, different risk factors. Investigations of a unique three-generation family showing monogenic inheritance of speech and language deficits led to the isolation of the first such gene on chromosome 7, which encodes a transcription factor known as FOXP2. Disruption of this gene causes a rare severe speech and language disorder but does not appear to be involved in more common forms of language impairment. Recent genome-wide scans have identified at least four chromosomal regions that may harbor genes influencing the latter, on chromosomes 2, 13, 16, and 19. The molecular genetic approach has potential for dissecting neurological pathways underlying speech and language disorders, but such investigations are only just beginning.
  • Fisher, S. E., Van Bakel, I., Lloyd, S. E., Pearce, S. H. S., Thakker, R. V., & Craig, I. W. (1995). Cloning and characterization of CLCN5, the human kidney chloride channel gene implicated in Dent disease (an X-linked hereditary nephrolithiasis). Genomics, 29, 598-606. doi:10.1006/geno.1995.9960.

    Abstract

    Dent disease, an X-linked familial renal tubular disorder, is a form of Fanconi syndrome associated with proteinuria, hypercalciuria, nephrocalcinosis, kidney stones, and eventual renal failure. We have previously used positional cloning to identify the 3' part of a novel kidney-specific gene (initially termed hClC-K2, but now referred to as CLCN5), which is deleted in patients from one pedigree segregating Dent disease. Mutations that disrupt this gene have been identified in other patients with this disorder. Here we describe the isolation and characterization of the complete open reading frame of the human CLCN5 gene, which is predicted to encode a protein of 746 amino acids, with significant homology to all known members of the ClC family of voltage-gated chloride channels. CLCN5 belongs to a distinct branch of this family, which also includes the recently identified genes CLCN3 and CLCN4. We have shown that the coding region of CLCN5 is organized into 12 exons, spanning 25-30 kb of genomic DNA, and have determined the sequence of each exon-intron boundary. The elucidation of the coding sequence and exon-intron organization of CLCN5 will both expedite the evaluation of structure/function relationships of these ion channels and facilitate the screening of other patients with renal tubular dysfunction for mutations at this locus.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, S. E., Ciccodicola, A., Tanaka, K., Curci, A., Desicato, S., D'urso, M., & Craig, I. W. (1997). Sequence-based exon prediction around the synaptophysin locus reveals a gene-rich area containing novel genes in human proximal Xp. Genomics, 45, 340-347. doi:10.1006/geno.1997.4941.

    Abstract

    The human Xp11.23-p11.22 interval has been implicated in several inherited diseases including Wiskott-Aldrich syndrome; three forms of X-linked hypercalciuric nephrolithiasis; and the eye disorders retinitis pigmentosa 2, congenital stationary night blindness, and Aland Island eye disease. In constructing YAC contigs spanning Xp11.23-p11.22, we have previously shown that the region around the synaptophysin (SYP) gene is refractory to cloning in YACs, but highly stable in cosmids. Preliminary analysis of the latter suggested that this might reflect a high density of coding sequences and we therefore undertook the complete sequencing of a SYP-containing cosmid. Sequence data were extensively analyzed using computer programs such as CENSOR (to mask repeats), BLAST (for homology searches), and GRAIL and GENE-ID (to predict exons). This revealed the presence of 29 putative exons, organized into three genes, in addition to the 7 exons of the complete SYP coding region, all mapping within a 44-kb interval. Two genes are novel, one (CACNA1F) showing high homology to alpha1 subunits of calcium channels, the other (LMO6) encoding a product with significant similarity to LIM-domain proteins. RT-PCR and Northern blot studies confirmed that these loci are indeed transcribed. The third locus is the previously described, but not previously localized, A4 differentiation-dependent gene. Given that the intron-exon boundaries predicted by the analysis are consistent with previous information where available, we have been able to suggest the genomic organization of the novel genes with some confidence. The region has an elevated GC content (>53%), and we identified CpG islands associated with the 5' ends of SYP, A4, and LMO6. The order of loci was Xpter-A4-LMO6-SYP-CACNA1F-Xcen, with intergenic distances ranging from approximately 300 bp to approximately 5 kb. The density of transcribed sequences in this area (>80%) is comparable to that found in the highly gene-rich chromosomal band Xq28. Further studies may aid our understanding of the long-range organization surrounding such gene-enriched regions.
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
