Publications

  • Cutler, A., & Broersma, M. (2005). Phonetic precision in listening. In W. J. Hardcastle, & J. M. Beck (Eds.), A figure of speech: A Festschrift for John Laver (pp. 63-91). Mahwah, NJ: Erlbaum.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Klein, W., & Levinson, S. C. (2005). The cornerstones of twenty-first century psycholinguistics. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 1-20). Mahwah, NJ: Erlbaum.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On Speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., Smits, R., & Cooper, N. (2005). Vowel perception: Effects of non-native language vs. non-native dialect. Speech Communication, 47(1-2), 32-42. doi:10.1016/j.specom.2005.02.001.

    Abstract

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). The identification performance of native listeners was significantly better than that of listeners with another language but did not significantly differ from the performance of listeners with another dialect. Dialect differences did however affect the type of perceptual confusions which listeners made; in particular, the Australian listeners’ judgements of vowel tenseness were more variable than the American listeners’ judgements, which may be ascribed to cross-dialectal differences in this vocalic feature. Although listening difficulty can result when speech input mismatches the native dialect in terms of the precise cues for and boundaries of phonetic categories, the difficulty is very much less than that which arises when speech input mismatches the native language in terms of the repertoire of phonemic categories available.
  • Cutler, A. (2005). Why is it so hard to understand a second language in noise? Newsletter, American Association of Teachers of Slavic and East European Languages, 48, 16-16.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous; there are no reliable cues that tell the listener where one word ends and the next begins. For adult listeners, segmenting spoken language into separate words is therefore not without problems, but for a child who does not yet have a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of their second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life, particularly during the second half, speech perception develops from a general phonetic discrimination capacity into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that, long before they can say even a single word, children are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words that were first presented in isolation when these occur in a continuous speech context. The everyday language input to a child of this age does not exactly make this easy, for instance because most words do not occur in isolation. Yet the child is also given some support, among other things because the range of words used is limited.
  • Cutler, A. (1980). Errors of stress and intonation. In V. A. Fromkin (Ed.), Errors in linguistic performance: Slips of the tongue, ear, pen and hand (pp. 67-80). New York: Academic Press.
  • Cutler, A. (2005). Lexical stress. In D. B. Pisoni, & R. E. Remez (Eds.), The handbook of speech perception (pp. 264-289). Oxford: Blackwell.
  • Cutler, A., Mehler, J., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A. (1980). La leçon des lapsus. La Recherche, 11(112), 686-692.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1,2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A. (1980). Syllable omission errors and isochrony. In H. W. Dechert, & M. Raupach (Eds.), Temporal variables in speech: studies in honour of Frieda Goldman-Eisler (pp. 183-190). The Hague: Mouton.
  • Cutler, A. (1988). The perfect speech error. In L. Hyman, & C. Li (Eds.), Language, speech and mind: Studies in honor of Victoria A. Fromkin (pp. 209-223). London: Croom Helm.
  • Cutler, A., & Isard, S. D. (1980). The production of prosody. In B. Butterworth (Ed.), Language production (pp. 245-269). London: Academic Press.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453-459.

    Abstract

    Participants' eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent's concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent's name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.
  • Davis, M. H., Johnsrude, I. S., Hervais-Adelman, A., Taylor, K., & McGettigan, C. (2005). Lexical information drives perceptual learning of distorted speech: Evidence from the comprehension of noise-vocoded sentences. Journal of Experimental Psychology-General, 134(2), 222-241. doi:10.1037/0096-3445.134.2.222.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [ŋ] is mispronounced as [n], the [n] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Den Os, E., & Boves, L. (2004). Natural multimodal interaction for design applications. In P. Cunningham (Ed.), Adoption and the knowledge economy (pp. 1403-1410). Amsterdam: IOS Press.
  • Dijkstra, T., Moscoso del Prado Martín, F., Schulpen, B., Schreuder, R., & Baayen, R. H. (2005). A roommate in cream: Morphological family size effects on interlingual homograph recognition. Language and Cognitive Processes, 20, 7-41. doi:10.1080/01690960444000124.
  • Dimroth, C., & Lindner, K. (2005). Was langsame Lerner uns zeigen können: der Erwerb der Finitheit im Deutschen durch einsprachige Kinder mit spezifischer Sprachentwicklungsstörung und durch Zweitsprachlerner. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 40-61.
  • Dimroth, C., & Watorek, M. (2005). Additive scope particles in advanced learner and native speaker discourse. In H. Hendriks (Ed.), The structure of learner varieties (pp. 461-488). Berlin: Mouton de Gruyter.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Dirksmeyer, T. (2005). Why do languages die? Approaching taxonomies, (re-)ordering causes. In J. Wohlgemuth, & T. Dirksmeyer (Eds.), Bedrohte Vielfalt. Aspekte des Sprach(en)tods – Aspects of language death (pp. 53-68). Berlin: Weißensee.

    Abstract

    Under what circumstances do languages die? Why has their “mortality rate” increased dramatically in the recent past? What “causes of death” can be identified for historical cases, to what extent are these generalizable, and how can they be captured in an explanatory theory? In pursuing these questions, it becomes apparent that in typical cases of language death various causes tend to interact in multiple ways. Speakers’ attitudes towards their language play a critical role in all of this. Existing categorial taxonomies do not succeed in modeling the complex relationships between these factors. Therefore, an alternative, dimensional approach is called for to more adequately address (and counter) the causes of language death in a given scenario.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drude, S. (2005). A contribuição alemã à Lingüística e Antropologia dos índios do Brasil, especialmente da Amazônia. In J. J. A. Alves (Ed.), Múltiplas Faces da História das Ciências na Amazônia (pp. 175-196). Belém: EDUFPA.
  • Dunn, M., Terrill, A., Reesink, G., Foley, R. A., & Levinson, S. C. (2005). Structural phylogenetics and the reconstruction of ancient language history. Science, 309(5743), 2072-2075. doi:10.1126/science.1114615.
  • Dunn, M., & Terrill, A. (2004). Lexical comparison between Papuan languages: Inland bird and tree species. In A. Majid (Ed.), Field Manual Volume 9 (pp. 65-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492942.

    Abstract

    The Pioneers project seeks to uncover relationships between the Papuan languages of Island Melanesia. One basic way to uncover linguistic relationships, either contact or genetic, is through lexical comparison. We have seen very few shared words between our Papuan languages and any other languages, either Oceanic or Papuan, but most of the words which are shared are shared because they are commonly borrowed from Oceanic languages. This task is aimed at enabling fieldworkers to collect terms for inland bird and tree species. In the past it has proved very difficult for non-experts to identify plant and bird species, so the task consists of a booklet of colour pictures of some of the more common species, with information on the range and habits of each species, as well as some information on their cultural uses, which should enable better identification. It is intended that fieldworkers will show this book to consultants and use it as an elicitation aid.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardisation and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic as it is based on features of Russian phonology, rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eisenbeiss, S. (2000). The acquisition of Determiner Phrase in German child language. In M.-A. Friedemann, & L. Rizzi (Eds.), The Acquisition of Syntax (pp. 26-62). Harlow, UK: Pearson Education Ltd.
  • Eisner, F., & McQueen, J. M. (2005). The specificity of perceptual learning in speech processing. Perception & Psychophysics, 67(2), 224-238.

    Abstract

    We conducted four experiments to investigate the specificity of perceptual adjustments made to unusual speech sounds. Dutch listeners heard a female talker produce an ambiguous fricative [?] (between [f] and [s]) in [f]- or [s]-biased lexical contexts. Listeners with [f]-biased exposure (e.g., [witlo?]; from witlof, “chicory”; witlos is meaningless) subsequently categorized more sounds on an [εf]–[εs] continuum as [f] than did listeners with [s]-biased exposure. This occurred when the continuum was based on the exposure talker's speech (Experiment 1), and when the same test fricatives appeared after vowels spoken by novel female and male talkers (Experiments 1 and 2). When the continuum was made entirely from a novel talker's speech, there was no exposure effect (Experiment 3) unless fricatives from that talker had been spliced into the exposure talker's speech during exposure (Experiment 4). We conclude that perceptual learning about idiosyncratic speech is applied at a segmental level and is, under these exposure conditions, talker specific.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2005). The body as a cognitive artifact in kinship representations: Hand gesture diagrams by speakers of Lao. Current Anthropology, 46(1), 51-81.

    Abstract

    Central to cultural, social, and conceptual life are cognitive artifacts, the perceptible structures which populate our world and mediate our navigation of it, complementing, enhancing, and altering available affordances for the problem-solving challenges of everyday life. Much work in this domain has concentrated on technological artifacts, especially manual tools and devices and the conceptual and communicative tools of literacy and diagrams. Recent research on hand gestures and other bodily movements which occur during speech shows that the human body serves a number of the functions of "cognitive technologies," affording the special cognitive advantages claimed to be associated exclusively with enduring (e.g., printed or drawn) diagrammatic representations. The issue is explored with reference to extensive data from video-recorded interviews with speakers of Lao in Vientiane, Laos, which show integration of verbal descriptions with complex spatial representations akin to diagrams. The study has implications both for research on cognitive artifacts (namely, that the body is a visuospatial representational resource not to be overlooked) and for research on ethnogenealogical knowledge (namely, that hand gestures reveal speakers' conceptualizations of kinship structure which are of a different nature to and not necessarily retrievable from the accompanying linguistic code).
  • Enfield, N. J., Levinson, S. C., De Ruiter, J. P., & Stivers, T. (2004). Building a corpus of multimodal interaction in your field site. In A. Majid (Ed.), Field Manual Volume 9 (pp. 32-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506951.

    Abstract

    This Field Manual entry has been superseded by the 2007 version: https://doi.org/10.17617/2.468728

  • Enfield, N. J. (2005). Depictive and other secondary predication in Lao. In N. P. Himmelmann, & E. Schultze-Berndt (Eds.), Secondary predication and adverbial modification (pp. 379-392). Oxford: Oxford University Press.
  • Enfield, N. J. (2004). Adjectives in Lao. In R. M. W. Dixon, & A. Y. Aikhenvald (Eds.), Adjective classes: A cross-linguistic typology (pp. 323-347). Oxford: Oxford University Press.
  • Enfield, N. J. (2005). Areal linguistics and mainland Southeast Asia. Annual Review of Anthropology, 34, 181-206. doi:10.1146/annurev.anthro.34.081804.120406.
  • Enfield, N. J. (2005). [Comment on the book Explorations in the deictic field]. Current Anthropology, 46(2), 212-212.
  • Enfield, N. J. (2005). [Review of the book Laughter in interaction by Philip Glenn]. Linguistics, 43(6), 1195-1197. doi:10.1515/ling.2005.43.6.1191.
  • Enfield, N. J. (2005). Micro and macro dimensions in linguistic systems. In S. Marmaridou, K. Nikiforidou, & E. Antonopoulou (Eds.), Reviewing linguistic thought: Converging trends for the 21st Century (pp. 313-326). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (2000). On linguocentrism. In M. Pütz, & M. H. Verspoor (Eds.), Explorations in linguistic relativity (pp. 125-157). Amsterdam: Benjamins.
  • Enfield, N. J. (2004). Repair sequences in interaction. In A. Majid (Ed.), Field Manual Volume 9 (pp. 48-52). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492945.

    Abstract

    This Field Manual entry has been superseded by the 2007 version: https://doi.org/10.17617/2.468724

  • Enfield, N. J. (2005). [Review of the book The Handbook of Historical Linguistics, edited by Brian D. Joseph and Richard D. Janda]. Linguistics, 43(6), 1191-1197. doi:10.1515/ling.2005.43.6.1191.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Ernestus, M., Mak, W. M., & Baayen, R. H. (2005). Waar 't kofschip strandt. Levende Talen Magazine, 92, 9-11.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Mak, W. M. (2005). Analogical effects in reading Dutch verb forms. Memory & Cognition, 33(7), 1160-1173.

    Abstract

    Previous research has shown that the production of morphologically complex words in isolation is affected by the properties of morphologically, phonologically, or semantically similar words stored in the mental lexicon. We report five experiments with Dutch speakers that show that reading an inflectional word form in its linguistic context is also affected by analogical sets of formally similar words. Using the self-paced reading technique, we show in Experiments 1-3 that an incorrectly spelled suffix delays readers less if the incorrect spelling is in line with the spelling of verbal suffixes in other inflectional forms of the same verb. In Experiments 4 and 5, our use of the self-paced reading technique shows that formally similar words with different stems affect the reading of incorrect suffixal allomorphs on a given stem. These intra- and interparadigmatic effects in reading may be due to online processes or to the storage of incorrect forms resulting from analogical effects in production.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip. Onze Taal, 73(12), 360-361.
  • Evans, N., Levinson, S. C., Enfield, N. J., Gaby, A., & Majid, A. (2004). Reciprocal constructions and situation type. In A. Majid (Ed.), Field Manual Volume 9 (pp. 25-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506955.
  • Fisher, S. E. (2005). Dissection of molecular mechanisms underlying speech and language disorders. Applied Psycholinguistics, 26, 111-128. doi:10.1017/S0142716405050095.

    Abstract

    Developmental disorders affecting speech and language are highly heritable, but very little is currently understood about the neuromolecular mechanisms that underlie these traits. Integration of data from diverse research areas, including linguistics, neuropsychology, neuroimaging, genetics, molecular neuroscience, developmental biology, and evolutionary anthropology, is becoming essential for unraveling the relevant pathways. Recent studies of the FOXP2 gene provide a case in point. Mutation of FOXP2 causes a rare form of speech and language disorder, and the gene appears to be a crucial regulator of embryonic development for several tissues. Molecular investigations of the central nervous system indicate that the gene may be involved in establishing and maintaining connectivity of corticostriatal and olivocerebellar circuits in mammals. Notably, it has been shown that FOXP2 was subject to positive selection in recent human evolution. Consideration of findings from multiple levels of analysis demonstrates that FOXP2 cannot be characterized as “the gene for speech,” but rather as one critical piece of a complex puzzle. This story gives a flavor of what is to come in this field and indicates that anyone expecting simple explanations of etiology or evolution should be prepared for some intriguing surprises.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, S. E. (2005). On genes, speech, and language. The New England Journal of Medicine, 353, 1655-1657. doi:10.1056/NEJMp058207.

    Abstract

    Learning to talk is one of the most important milestones in human development, but we still have only a limited understanding of the way in which the process occurs. It normally takes just a few years to go from babbling newborn to fluent communicator. During this period, the child learns to produce a rich array of speech sounds through intricate control of articulatory muscles, assembles a vocabulary comprising thousands of words, and deduces the complicated structural rules that permit construction of meaningful sentences. All of this (and more) is achieved with little conscious effort.

  • Forkstam, C., & Petersson, K. M. (2005). Towards an explicit account of implicit learning. Current Opinion in Neurology, 18(4), 435-441.

    Abstract

    Purpose of review: The human brain supports acquisition mechanisms that can extract structural regularities implicitly from experience without the induction of an explicit model. Reber defined the process by which an individual comes to respond appropriately to the statistical structure of the input ensemble as implicit learning. He argued that the capacity to generalize to new input is based on the acquisition of abstract representations that reflect underlying structural regularities in the acquisition input. We focus this review of the implicit learning literature on studies published during 2004 and 2005. We will not review studies of repetition priming ('implicit memory'). Instead we focus on two commonly used experimental paradigms: the serial reaction time task and artificial grammar learning. Previous comprehensive reviews can be found in Seger's 1994 article and the Handbook of Implicit Learning. Recent findings: Emerging themes include the interaction between implicit and explicit processes, the role of the medial temporal lobe, developmental aspects of implicit learning, age-dependence, the role of sleep and consolidation. Summary: The attempts to characterize the interaction between implicit and explicit learning are promising although not well understood. The same can be said about the role of sleep and consolidation. Despite the fact that lesion studies have relatively consistently suggested that the medial temporal lobe memory system is not necessary for implicit learning, a number of functional magnetic resonance studies have reported medial temporal lobe activation in implicit learning. This issue merits further research. Finally, the clinical relevance of implicit learning remains to be determined.
  • Fradera, A., & Sauter, D. (2004). Make yourself happy. In T. Stafford, & M. Webb (Eds.), Mind hacks: tips & tools for using your brain (pp. 325-327). Sebastopol, CA: O'Reilly.

    Abstract

    Turn on your affective system by tweaking your face muscles - or getting an eyeful of someone else doing the same.
  • Fradera, A., & Sauter, D. (2004). Reminisce hot and cold. In T. Stafford, & M. Webb (Eds.), Mind hacks: tips & tools for using your brain (pp. 327-331). Sebastopol, CA: O'Reilly.

    Abstract

    Find the fire that's cooking your memory systems.
  • Fradera, A., & Sauter, D. (2004). Signal emotion. In T. Stafford, & M. Webb (Eds.), Mind hacks: tips & tools for using your brain (pp. 320-324). Sebastopol, CA: O'Reilly.

    Abstract

    Emotions are powerful on the inside but often displayed in subtle ways on the outside. Are these displays culturally dependent or universal?
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
  • Francks, C., Paracchini, S., Smith, S. D., Richardson, A. J., Scerri, T. S., Cardon, L. R., Marlow, A. J., MacPhie, I. L., Walter, J., Pennington, B. F., Fisher, S. E., Olson, R. K., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2004). A 77-kilobase region of chromosome 6p22.2 is associated with dyslexia in families from the United Kingdom and from the United States. American Journal of Human Genetics, 75(6), 1046-1058. doi:10.1086/426404.

    Abstract

    Several quantitative trait loci (QTLs) that influence developmental dyslexia (reading disability [RD]) have been mapped to chromosome regions by linkage analysis. The most consistently replicated area of linkage is on chromosome 6p23-21.3. We used association analysis in 223 siblings from the United Kingdom to identify an underlying QTL on 6p22.2. Our association study implicates a 77-kb region spanning the gene TTRAP and the first four exons of the neighboring uncharacterized gene KIAA0319. The region of association is also directly upstream of a third gene, THEM2. We found evidence of these associations in a second sample of siblings from the United Kingdom, as well as in an independent sample of twin-based sibships from Colorado. One main RD risk haplotype that has a frequency of ∼12% was found in both the U.K. and U.S. samples. The haplotype is not distinguished by any protein-coding polymorphisms, and, therefore, the functional variation may relate to gene expression. The QTL influences a broad range of reading-related cognitive abilities but has no significant impact on general cognitive performance in these samples. In addition, the QTL effect may be largely limited to the severe range of reading disability.
  • Friederici, A., & Levelt, W. J. M. (1988). Sprache. In K. Immelmann, K. Scherer, C. Vogel, & P. Schmook (Eds.), Psychobiologie: Grundlagen des Verhaltens (pp. 648-671). Stuttgart: Fischer.
  • Gaby, A. R. (2005). Some participants are more equal than others: Case and the composition of arguments in Kuuk Thaayorre. In M. Amberber, & H. de Hoop (Eds.), Competition and variation in natural languages: the case for the case (pp. 9-39). Amsterdam: Elsevier.
  • Gaby, A. R. (2004). Extended functions of Thaayorre body part terms. Papers in Linguistics and Applied Linguistics, 4(2), 24-34.
  • Gayán, J., Willcutt, E. G., Fisher, S. E., Francks, C., Cardon, L. R., Olson, R. K., Pennington, B. F., Smith, S., Monaco, A. P., & DeFries, J. C. (2005). Bivariate linkage scan for reading disability and attention-deficit/hyperactivity disorder localizes pleiotropic loci. Journal of Child Psychology and Psychiatry, 46(10), 1045-1056. doi:10.1111/j.1469-7610.2005.01447.x.

    Abstract

    BACKGROUND: There is a growing interest in the study of the genetic origins of comorbidity, a direct consequence of the recent findings of genetic loci that are seemingly linked to more than one disorder. There are several potential causes for these shared regions of linkage, but one possibility is that these loci may harbor genes with manifold effects. The established genetic correlation between reading disability (RD) and attention-deficit/hyperactivity disorder (ADHD) suggests that their comorbidity is due at least in part to genes that have an impact on several phenotypes, a phenomenon known as pleiotropy. METHODS: We employ a bivariate linkage test for selected samples that could help identify these pleiotropic loci. This linkage method was employed to carry out the first bivariate genome-wide analysis for RD and ADHD, in a selected sample of 182 sibling pairs. RESULTS: We found evidence for a novel locus at chromosome 14q32 (multipoint LOD=2.5; singlepoint LOD=3.9) with a pleiotropic effect on RD and ADHD. Another locus at 13q32, which had been implicated in previous univariate scans of RD and ADHD, seems to have a pleiotropic effect on both disorders. 20q11 is also suggested as a pleiotropic locus. Other loci previously implicated in RD or ADHD did not exhibit bivariate linkage. CONCLUSIONS: Some loci are suggested as having pleiotropic effects on RD and ADHD, while others might have unique effects. These results highlight the utility of this bivariate linkage method to study pleiotropy.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies using positron emission tomography have revealed, during different experimental tasks, decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher-order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gisselgard, J., Petersson, K. M., & Ingvar, M. (2004). The irrelevant speech effect and working memory load. NeuroImage, 22, 1107-1116. doi:10.1016/j.neuroimage.2004.02.031.

    Abstract

    Irrelevant speech impairs the immediate serial recall of visually presented material. Previously, we have shown that the irrelevant speech effect (ISE) was associated with a relative decrease of regional blood flow in cortical regions subserving the verbal working memory, in particular the superior temporal cortex. In this extension of the previous study, the working memory load was increased and an increased activity as a response to irrelevant speech was noted in the dorsolateral prefrontal cortex. We suggest that the two studies together provide some basic insights as to the nature of the irrelevant speech effect. Firstly, no area in the brain can be ascribed as the single locus of the irrelevant speech effect. Instead, the functional neuroanatomical substrate to the effect can be characterized in terms of changes in networks of functionally interrelated areas. Secondly, the areas that are sensitive to the irrelevant speech effect are also generically activated by the verbal working memory task itself. Finally, the impact of irrelevant speech and related brain activity depends on working memory load as indicated by the differences between the present and the previous study. From a brain perspective, the irrelevant speech effect may represent a complex phenomenon that is a composite of several underlying mechanisms, which, depending on the working memory load, include top-down inhibition as well as recruitment of compensatory support and control processes. We suggest that, in the low-load condition, a selection process by an inhibitory top-down modulation is sufficient, whereas in the high-load condition, at or above working memory span, auxiliary adaptive cognitive resources are recruited as compensation.
  • Gonzalez da Silva, C., Petersson, K. M., Faísca, L., Ingvar, M., & Reis, A. (2004). The effects of literacy and education on the quantitative and qualitative aspects of semantic verbal fluency. Journal of Clinical and Experimental Neuropsychology, 26(2), 266-277. doi:10.1076/jcen.26.2.266.28089.

    Abstract

    Semantic verbal fluency tasks are commonly used in neuropsychological assessment. Investigations of the influence of level of literacy have not yielded consistent results in the literature. This prompted us to investigate the ecological relevance of task specifics, in particular the choice of semantic criteria used. Two groups of literate and illiterate subjects were compared on two verbal fluency tasks using different semantic criteria. The performance on a food criterion (supermarket fluency task), considered more ecologically relevant for the two literacy groups, and an animal criterion (animal fluency task) were compared. The data were analysed using both quantitative and qualitative measures. The quantitative analysis indicated that the two literacy groups performed equally well on the supermarket fluency task. In contrast, results differed significantly on the animal fluency task. The qualitative analyses indicated differences between groups related to the strategies used, especially with respect to the animal fluency task. The overall results suggest that there is not a substantial difference between literate and illiterate subjects in the fundamental workings of semantic memory. However, there is an indication that the content of semantic memory reflects differences in shared cultural background (in other words, formal education), as indicated by the significant interaction between level of literacy and semantic criterion.
  • Goudbeek, M., Smits, R., Cutler, A., & Swingley, D. (2005). Acquiring auditory and phonetic categories. In H. Cohen, & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (pp. 497-513). Amsterdam: Elsevier.
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train" (5) and the "entangled-bank" (6, 7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Gretsch, P. (2004). What does finiteness mean to children? A cross-linguistic perspective on root infinitives. Linguistics, 42(2), 419-468. doi:10.1515/ling.2004.014.

    Abstract

    The discussion on root infinitives has mainly centered around their supposed modal usage. This article aims at modelling the form-function relation of the root infinitive phenomenon by taking into account the full range of interpretational facets encountered cross-linguistically and interindividually. Following the idea of a subsequent "cell partitioning" in the emergence of form-function correlations, I claim that it is the major fission between [±finite] which is central to expressing temporal reference different from the default here&now in tense-oriented languages. In aspectual-oriented languages, a similar opposition is mastered with the marking of early aspectual forms. It is observed that in tense-oriented languages like Dutch and German, the progression of functions associated with the infinitival form proceeds from nonmodal to modal, whereas the reverse progression holds for the Russian infinitive. Based on this crucial observation, a model of acquisition is proposed which allows for a flexible and systematic relationship between morphological forms and their respective interpretational biases, dependent on their developmental context. As for early child language, I argue that children entertain only two temporal parameters: one parameter is fixed to the here&now point in time, and a second parameter relates to the time talked about, the topic time; this latter time overlaps the situation time as long as no empirical evidence exists to support the emergence of a proper distinction between tense and aspect.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.
  • Le Guen, O. (2005). Geografía de lo sagrado entre los Mayas Yucatecos de Quintana Roo: configuración del espacio y su aprendizaje entre los niños. Ketzalcalli, 2005(1), 54-68.
  • Guerrero, L., & Van Valin Jr., R. D. (2004). Yaqui and the analysis of primary object languages. International Journal of American Linguistics, 70(3), 290-319. doi:10.1086/425603.

    Abstract

    The central aim of this study is to investigate three- and four-place predicates in Yaqui, which are characterized by having multiple object arguments. As with other Southern Uto-Aztecan languages, it has been said that Yaqui follows the primary/secondary object pattern (Dryer 1986). In fact, Yaqui presents three patterns: verbs like nenka ‘sell’ follow the direct–indirect object pattern, verbs like miika ‘give’ follow the primary object pattern, and verbs like chijakta ‘sprinkle’ follow the locative alternation pattern; the primary object pattern is the only one found with derived verbs. This paper shows that the contrast between direct object and primary object languages is not absolute but rather one of degree, and hence two “object” selection principles are needed to explain this mixed system. The two principles are not limited to Yaqui but are found in other languages as well, including English.
  • Gullberg, M. (2004). [Review of the book Pointing: Where language, culture and cognition meet ed. by Sotaro Kita]. Gesture, 4(2), 235-248. doi:10.1075/gest.4.2.08gul.
  • Gullberg, M. (2005). L'expression orale et gestuelle de la cohésion dans le discours de locuteurs langue 2 débutants. AILE, 23, 153-172.
  • Gumperz, J. J., & Levinson, S. C. (1991). Rethinking linguistic relativity. Current Anthropology, 32(5), 613-623. Retrieved from http://www.jstor.org/stable/2743696.
  • De Haan, E., & Hagoort, P. (2004). Het brein in beeld. In B. Deelman, P. Eling, E. De Haan, & E. Van Zomeren (Eds.), Klinische neuropsychologie (pp. 82-98). Amsterdam: Boom.
  • Hagoort, P. (2005). On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9(9), 416-423. doi:10.1016/j.tics.2005.07.004.

    Abstract

    In speaking and comprehending language, word information is retrieved from memory and combined into larger units (unification). Unification operations take place in parallel at the semantic, syntactic and phonological levels of processing. This article proposes a new framework that connects psycholinguistic models to a neurobiological account of language. According to this proposal the left inferior frontal gyrus (LIFG) plays an important role in unification. Research in other domains of cognition indicates that left prefrontal cortex has the necessary neurobiological characteristics for its involvement in the unification for language. I offer here a psycholinguistic perspective on the nature of language unification and the role of LIFG.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments, sentences were presented that contained three different types of grammatical violations. In one experiment, sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS) 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment, the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition, two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments, spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations, N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2005). De talige aap. Linguaan, 26-35.