Publications

  • Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.

    Abstract

    At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to "package" spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
  • Ameka, F. K. (1987). A comparative analysis of linguistic routines in two languages: English and Ewe. Journal of Pragmatics, 11(3), 299-326. doi:10.1016/0378-2166(87)90135-4.

    Abstract

    It is very widely acknowledged that linguistic routines are not only embodiments of the sociocultural values of speech communities that use them, but their knowledge and appropriate use also form an essential part of a speaker's communicative/pragmatic competence. Despite this, many studies concentrate more on describing the use of routines rather than explaining the socio-cultural aspects of their meaning and the way they affect their use. It is the contention of this paper that there is the need to go beyond descriptions to explanations and explications of the use and meaning of routines that are culturally and socially revealing. This view is illustrated by a comparative analysis of functionally equivalent formulaic expressions in English and Ewe. The similarities are noted and the differences explained in terms of the socio-cultural traditions associated with the respective languages. It is argued that insights gained from such studies are valuable for cross-cultural understanding and communication as well as for second language pedagogy.
  • Ameka, F. K. (1990). [Review of Robert Burchfield (ed.) Studies in lexicography]. Studies in Language, 14(2), 479-489.
  • Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.

    Abstract

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator. Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset. Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps. Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD.
  • Bauer, B. L. M. (2000). Archaic syntax in Indo-European: The spread of transitivity in Latin and French. Berlin: Mouton de Gruyter.

    Abstract

    Several grammatical features in early Indo-European traditionally have not been understood. Although Latin, for example, was a nominative language, a number of its inherited characteristics do not fit that typology and are difficult to account for, such as stative mihi est constructions to express possession, impersonal verbs, or absolute constructions. With time these archaic features have been replaced by transitive structures (e.g. possessive ‘have’). This book presents an extensive comparative and historical analysis of archaic features in early Indo-European languages and their gradual replacement in the history of Latin and early Romance, showing that the new structures feature transitive syntax and fit the patterns of a nominative language.
  • Bauer, B. L. M. (1994). [Review of the book Du latin aux langues romanes ed. by Maria Iliescu and Dan Slusanski]. Studies in Language, 18(2), 502-509. doi:10.1075/sl.18.2.08bau.
  • Bauer, B. L. M. (2000). From Latin to French: The linear development of word order. In B. Bichakjian, T. Chernigovskaya, A. Kendon, & A. Müller (Eds.), Becoming Loquens: More studies in language origins (pp. 239-257). Frankfurt am Main: Lang.
  • Bauer, B. L. M. (1987). L’évolution des structures morphologiques et syntaxiques du latin au français. Travaux de linguistique, 14-15, 95-107.
  • Bauer, B. L. M. (1994). The development of Latin absolute constructions: From stative to transitive structures. General Linguistics, 18, 64-83.
  • Bavin, E. L., & Kidd, E. (2000). Learning new verbs: Beyond the input. In C. Davis, T. J. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society.
  • Bickel, B. (1994). In the vestibule of meaning: Transitivity inversion as a morphological phenomenon. Studies in Language, 19(1), 73-127.
  • Blomert, L., & Hagoort, P. (1987). Neurobiologische en neuropsychologische aspecten van dyslexie. In J. Hamers, & A. Van der Leij (Eds.), Dyslexie 87 (pp. 35-44). Lisse: Swets and Zeitlinger.
  • Bock, K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M. A. Gernsbacher (Ed.), Handbook of Psycholinguistics (pp. 945-984). San Diego: Academic Press.
  • Bohnemeyer, J. (2000). Event order in language and cognition. Linguistics in the Netherlands, 17(1), 1-16. doi:10.1075/avt.17.04boh.
  • Bohnemeyer, J. (2000). Where do pragmatic meanings come from? In W. Spooren, T. Sanders, & C. van Wijk (Eds.), Samenhang in Diversiteit; Opstellen voor Leo Noorman, aangeboden bij gelegenheid van zijn zestigste verjaardag (pp. 137-153).
  • Bouman, M. A., & Levelt, W. J. M. (1994). Werner E. Reichardt: Levensbericht. In H. W. Pleket (Ed.), Levensberichten en herdenkingen 1993 (pp. 75-80). Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen.
  • Bowerman, M. (1975). Cross linguistic similarities at two stages of syntactic development. In E. Lenneberg, & E. Lenneberg (Eds.), Foundations of language development: A multidisciplinary approach (pp. 267-282). New York: Academic Press.
  • Bowerman, M. (1975). Commentary on L. Bloom, P. Lightbown, & L. Hood, “Structure and variation in child language”. Monographs of the Society for Research in Child Development, 40(2), 80-90. Retrieved from http://www.jstor.org/stable/1165986.
  • Bowerman, M. (1987). Commentary: Mechanisms of language acquisition. In B. MacWhinney (Ed.), Mechanisms of language acquisition (pp. 443-466). Hillsdale, N.J.: Lawrence Erlbaum.
  • Bowerman, M. (1994). From universal to language-specific in early grammatical development. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 346, 34-45. doi:10.1098/rstb.1994.0126.

    Abstract

    Attempts to explain children's grammatical development often assume a close initial match between units of meaning and units of form; for example, agents are said to map to sentence-subjects and actions to verbs. The meanings themselves, according to this view, are not influenced by language, but reflect children's universal non-linguistic way of understanding the world. This paper argues that, contrary to this position, meaning as it is expressed in children's early sentences is, from the beginning, organized on the basis of experience with the grammar and lexicon of a particular language. As a case in point, children learning English and Korean are shown to express meanings having to do with direct motion according to language-specific principles of semantic and grammatical structuring from the earliest stages of word combination.
  • Bowerman, M. (1990). Mapping thematic roles onto syntactic functions: Are children helped by innate linking rules? Linguistics, 28, 1253-1290. doi:10.1515/ling.1990.28.6.1253.

    Abstract

    In recent theorizing about language acquisition, children have often been credited with innate knowledge of rules that link thematic roles such as agent and patient to syntactic functions such as subject and direct object. These rules form the basis for the hypothesis that phrase-structure rules are established through 'semantic bootstrapping', and they are also invoked to explain the acquisition of verb subcategorization frames (for example, Pinker 1984). This study examines two versions of the hypothesis that linking rules are innate, pitting them against the alternative hypothesis that linking patterns are learned (as proposed, for example, by Foley and Van Valin 1984). The first version specifies linking rules through paired thematic-syntactic role hierarchies, and the second characterizes them as a function of verb semantic structure. When predictions of the two approaches are drawn out and tested against longitudinal spontaneous speech data from two children learning English, no support is found for the hypothesis that knowledge of linking is innate; ironically, in fact, the children had more trouble with verbs that should be easy to link than with those that should be more difficult. In contrast, the hypothesis that linking rules are learned is supported: at a relatively advanced age, the children began to produce errors that are best interpreted as overregularizations of a statistically predominant linking pattern to which they had become sensitive through linguistic experience.
  • Bowerman, M. (1994). Learning a semantic system: What role do cognitive predispositions play? [Reprint]. In P. Bloom (Ed.), Language acquisition: Core readings (pp. 329-363). Cambridge, MA: MIT Press.

    Abstract

    Reprint from: Bowerman, M. (1989). Learning a semantic system: What role do cognitive predispositions play? In M. L. Rice & R. L. Schiefelbusch (Eds.), The teachability of language (pp. 133-169). Baltimore: Paul H. Brookes.
  • Bowerman, M., & Perdue, C. (1990). Introduction to the special issue. Linguistics, 28(6), 1131-1133. doi:10.1515/ling.1990.28.6.1131.

    Abstract

    This thematic issue contains 11 papers first presented at a conference on 'The Structure of the Simple Clause in Language Acquisition', held at the Max Planck Institute in Nijmegen from November 9-13, 1987. The issue concentrates on first-language acquisition. Papers on developmental dysphasia, creolization, and adult language acquisition were also presented at the conference and are being published elsewhere.
  • Bowerman, M. (1980). The structure and origin of semantic categories in the language learning child. In M. Foster, & S. Brandes (Eds.), Symbol as sense (pp. 277-299). New York: Academic Press.
  • Bowerman, M., & Perdue, C. (Eds.). (1990). The structure of the simple clause in language acquisition [Special Issue]. Linguistics, 28(6).
  • Bowerman, M. (2000). Where do children's word meanings come from? Rethinking the role of cognition in early semantic development. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought and development (pp. 199-230). Mahwah, NJ: Lawrence Erlbaum.
  • Brown, C. M., Van Berkum, J. J. A., & Hagoort, P. (2000). Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding. Journal of Psycholinguistic Research, 29(1), 53-68. doi:10.1023/A:1005172406969.

    Abstract

    A study is presented on the effects of discourse–semantic and lexical–syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior discourse–semantic information biased toward one analysis of the temporary ambiguity, whereas the lexical-syntactic information allowed only for the alternative analysis. The ERP results show that discourse–semantic information can momentarily take precedence over syntactic information, even if this violates grammatical gender agreement rules.
  • Brown, C. M., Hagoort, P., & Chwilla, D. J. (2000). An event-related brain potential analysis of visual word priming effects. Brain and Language, 72, 158-190. doi:10.1006/brln.1999.2284.

    Abstract

    Two experiments are reported that provide evidence on task-induced effects during visual lexical processing in a prime-target semantic priming paradigm. The research focuses on target expectancy effects by manipulating the proportion of semantically related and unrelated word pairs. In Experiment 1, a lexical decision task was used and reaction times (RTs) and event-related brain potentials (ERPs) were obtained. In Experiment 2, subjects silently read the stimuli, without any additional task demands, and ERPs were recorded. The RT and ERP results of Experiment 1 demonstrate that an expectancy mechanism contributed to the priming effect when a high proportion of related word pairs was presented. The ERP results of Experiment 2 show that in the absence of extraneous task requirements, an expectancy mechanism is not active. However, a standard ERP semantic priming effect was obtained in Experiment 2. The combined results show that priming effects due to relatedness proportion are induced by task demands and are not a standard aspect of online lexical processing.
  • Brown, P. (2000). ‘He descended legs-upwards’: Position and motion in Tzeltal frog stories. In E. V. Clark (Ed.), Proceedings of the 30th Stanford Child Language Research Forum (pp. 67-75). Stanford: CSLI.

    Abstract

    How are events framed in narrative? Speakers of English (a 'satellite-framed' language), when 'reading' Mercer Mayer's wordless picture book 'Frog, Where Are You?', find the story self-evident: a boy has a dog and a pet frog; the frog escapes and runs away; the boy and dog look for it across hill and dale, through woods and over a cliff, until they find it and return home with a baby frog child of the original pet frog. In Tzeltal, as spoken in a Mayan community in southern Mexico, the story is somewhat different, because the language structures event descriptions differently. Tzeltal is in part a 'verb-framed' language with a set of Path-encoding motion verbs, so that the bare bones of the Frog story can consist of verbs translating as 'go'/'pass by'/'ascend'/'descend'/'arrive'/'return'. But Tzeltal also has satellite-framing adverbials, grammaticized from the same set of motion verbs, which encode the direction of motion or the orientation of static arrays. Furthermore, vivid pictorial detail is provided by positional verbs which can describe the position of the Figure as an outcome of a motion event; motion and stasis are thereby combined in a single event description (for example: jipot jawal 'he has been thrown (by the deer) lying face-upwards spread-eagled'). This paper compares the use of these three linguistic resources in Frog Story narratives from Tzeltal adults and children, looks at their development in the narratives of children, and considers the results in relation to those from Berman and Slobin's (1996) comparative study of adult and child Frog stories.
  • Brown, P., & Levinson, S. C. (2000). Frames of spatial reference and their acquisition in Tenejapan Tzeltal. In L. Nucci, G. Saxe, & E. Turiel (Eds.), Culture, thought, and development (pp. 167-197). Mahwah, NJ: Erlbaum.
  • Brown, P. (1990). Gender, politeness and confrontation in Tenejapa. Discourse Processes, 13, 123-141.

    Abstract

    This paper compares some features of the interactional details of a Tenejapan (Mexico) court case with the features of social interaction characteristic of ordinary, casual encounters in this society. It is suggested that courtroom behaviour in Tenejapa is a very special form of interaction, in a context that uniquely allows for direct face-to-face confrontation in a society where a premium is placed on interactional restraint. Courtroom speech in Tenejapa directly contravenes norms and conventions that operate in other contexts, and women’s conventionalized polite ‘ways of putting things’ are here used sarcastically to be impolite. Thus, in this society, gender operates across contexts as a ‘master status’, but with gender meanings transformed in the different contexts: forms associated with superficial cooperation and agreement being used to emphasize lack of cooperation, disagreement, and hostility. The implications of this Tenejapan phenomenon for our understanding of the nature of relations between language and gender are explored.
  • Brown, P. (1980). How and why are women more polite: Some evidence from a Mayan community. In S. McConnell-Ginet, R. Borker, & N. Furman (Eds.), Women and language in literature and society (pp. 111-136). New York: Praeger.
  • Brown, C. M., & Hagoort, P. (2000). On the electrophysiology of language comprehension: Implications for the human language system. In M. W. Crocker, M. Pickering, & C. Clifton jr. (Eds.), Architectures and mechanisms for language processing (pp. 213-237). Cambridge University Press.
  • Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. Cambridge University Press.

    Abstract

    This study is about the principles for constructing polite speech. The core of it was published as Brown and Levinson (1978); here it is reissued with a new introduction which surveys the now considerable literature in linguistics, psychology and the social sciences that the original extended essay stimulated, and suggests new directions for research. We describe and account for some remarkable parallelisms in the linguistic construction of utterances with which people express themselves in different languages and cultures. A motive for these parallels is isolated - politeness, broadly defined to include both polite friendliness and polite formality - and a universal model is constructed outlining the abstract principles underlying polite usages. This is based on the detailed study of three unrelated languages and cultures: the Tamil of south India, the Tzeltal spoken by Mayan Indians in Chiapas, Mexico, and the English of the USA and England, supplemented by examples from other cultures. Of general interest is the point that underneath the apparent diversity of polite behaviour in different societies lie some general pan-human principles of social interaction, and the model of politeness provides a tool for analysing the quality of social relations in any society.
  • Brown, C. M., Hagoort, P., & Kutas, M. (2000). Postlexical integration processes during language comprehension: Evidence from brain-imaging research. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed., pp. 881-895). Cambridge, MA: MIT Press.
  • Brown, P. (1994). The INs and ONs of Tzeltal locative expressions: The semantics of static descriptions of location. Linguistics, 32, 743-790.

    Abstract

    This paper explores how static topological spatial relations such as contiguity, contact, containment, and support are expressed in the Mayan language Tzeltal. Three distinct Tzeltal systems for describing spatial relationships - geographically anchored (place names, geographical coordinates), viewer-centered (deictic), and object-centered (body parts, relational nouns, and dispositional adjectives) - are presented, but the focus here is on the object-centered system of dispositional adjectives in static locative expressions. Tzeltal encodes shape/position/configuration gestalts in verb roots; predicates formed from these are an essential element in locative descriptions. Specificity of shape in the predicate allows spatial relations between figure and ground objects to be understood by implication. Tzeltal illustrates an alternative strategy to that of prepositional languages like English: rather than elaborating shape distinctions in the nouns and minimizing them in the locatives, Tzeltal encodes shape and configuration very precisely in verb roots, leaving many object nouns unspecified for shape. The Tzeltal case thus presents a direct challenge to cognitive science claims that, in both language and cognition, WHAT is kept distinct from WHERE.
  • Butterfield, S., & Cutler, A. (1990). Intonational cues to word boundaries in clear speech? In Proceedings of the Institute of Acoustics: Vol 12, part 10 (pp. 87-94). St. Albans, Herts.: Institute of Acoustics.
  • Carlsson, K., Petrovic, P., Skare, S., Petersson, K. M., & Ingvar, M. (2000). Tickling expectations: Neural processing in anticipation of a sensory stimulus. Journal of Cognitive Neuroscience, 12(4), 691-703. doi:10.1162/089892900562318.
  • Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A. (1987). Components of prosodic effects in speech recognition. In Proceedings of the Eleventh International Congress of Phonetic Sciences: Vol. 1 (pp. 84-87). Tallinn: Academy of Sciences of the Estonian SSR, Institute of Language and Literature.

    Abstract

    Previous research has shown that listeners use the prosodic structure of utterances in a predictive fashion in sentence comprehension, to direct attention to accented words. Acoustically identical words spliced into sentence contexts are responded to differently if the prosodic structure of the context is varied: when the preceding prosody indicates that the word will be accented, responses are faster than when the preceding prosody is inconsistent with accent occurring on that word. In the present series of experiments speech hybridisation techniques were first used to interchange the timing patterns within pairs of prosodic variants of utterances, independently of the pitch and intensity contours. The time-adjusted utterances could then serve as a basis for the orthogonal manipulation of the three prosodic dimensions of pitch, intensity and rhythm. The overall pattern of results showed that when listeners use prosody to predict accent location, they do not simply rely on a single prosodic dimension, but exploit the interaction between pitch, intensity and rhythm.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous: there are no reliable signals that tell the listener where one word ends and the next begins. For adult listeners, segmenting spoken language into separate words is thus not unproblematic, but for a child that does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of the second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life, especially during the second half, speech perception develops from a general phonetic discrimination capacity into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that children, long before they can say even a single word, are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words first presented in isolation within a continuous speech context. The daily language input to a child of this age does not, in a certain sense, make this easy, for instance because most words do not occur in isolation. Yet the child is also offered some footholds, among other things because word usage is restricted.
  • Cutler, A., & Butterfield, S. (1990). Durational cues to word boundaries in clear speech. Speech Communication, 9, 485-495.

    Abstract

    One of a listener’s major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking clearly. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., McQueen, J. M., & Robinson, K. (1990). Elizabeth and John: Sound patterns of men’s and women’s names. Journal of Linguistics, 26, 471-482. doi:10.1017/S0022226700014754.
  • Cutler, A. (1980). Errors of stress and intonation. In V. A. Fromkin (Ed.), Errors in linguistic performance: Slips of the tongue, ear, pen and hand (pp. 67-80). New York: Academic Press.
  • Cutler, A. (1990). From performance to phonology: Comments on Beckman and Edwards's paper. In J. Kingston, & M. Beckman (Eds.), Papers in laboratory phonology I: Between the grammar and physics of speech (pp. 208-214). Cambridge: Cambridge University Press.
  • Cutler, A. (1994). How human speech recognition is affected by phonological diversity among languages. In R. Togneri (Ed.), Proceedings of the fifth Australian International Conference on Speech Science and Technology: Vol. 1 (pp. 285-288). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Listeners process spoken language in ways which are adapted to the phonological structure of their native language. As a consequence, non-native speakers do not listen to a language in the same way as native speakers; moreover, listeners may use their native language listening procedures inappropriately with foreign input. With sufficient experience, however, it may be possible to inhibit this latter (counter-productive) behavior.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A., Norris, D., & McQueen, J. M. (1994). Modelling lexical access from continuous speech input. Dokkyo International Review, 7, 193-215.

    Abstract

    The recognition of speech involves the segmentation of continuous utterances into their component words. Cross-linguistic evidence is briefly reviewed which suggests that although there are language-specific solutions to this segmentation problem, they have one thing in common: they are all based on language rhythm. In English, segmentation is stress-based: strong syllables are postulated to be the onsets of words. Segmentation, however, can also be achieved by a process of competition between activated lexical hypotheses, as in the Shortlist model. A series of experiments is summarised showing that segmentation of continuous speech depends on both lexical competition and a metrically-guided procedure. In the final section, the implementation of metrical segmentation in the Shortlist model is described: the activation of lexical hypotheses matching strong syllables in the input is boosted and that of hypotheses mismatching strong syllables in the input is penalised.
  • Cutler, A., & Otake, T. (1994). Mora or phoneme? Further evidence for language-specific listening. Journal of Memory and Language, 33, 824-844. doi:10.1006/jmla.1994.1039.

    Abstract

    Japanese listeners detect speech sound targets which correspond precisely to a mora (a phonological unit which is the unit of rhythm in Japanese) more easily than targets which do not. English listeners detect medial vowel targets more slowly than consonants. Six phoneme detection experiments investigated these effects in both subject populations, presented with native- and foreign-language input. Japanese listeners produced faster and more accurate responses to moraic than to nonmoraic targets both in Japanese and, where possible, in English; English listeners responded differently. The detection disadvantage for medial vowels appeared with English listeners both in English and in Japanese; again, Japanese listeners responded differently. Some processing operations which listeners apply to speech input are language-specific; these language-specific procedures, appropriate for listening to input in the native language, may be applied to foreign-language input irrespective of whether they remain appropriate.
  • Cutler, A. (1980). La leçon des lapsus. La Recherche, 11(112), 686-692.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A. (1990). Exploiting prosodic probabilities in speech segmentation. In G. Altmann (Ed.), Cognitive models of speech processing: Psycholinguistic and computational perspectives (pp. 105-121). Cambridge, MA: MIT Press.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
  • Cutler, A. (1980). Productivity in word formation. In J. Kreiman, & A. E. Ojeda (Eds.), Papers from the Sixteenth Regional Meeting, Chicago Linguistic Society (pp. 45-51). Chicago, Ill.: CLS.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A. (1975). Sentence stress and sentence comprehension. PhD Thesis, University of Texas, Austin.
  • Cutler, A., & Scott, D. R. (1990). Speaker sex and perceived apportionment of talk. Applied Psycholinguistics, 11, 253-272. doi:10.1017/S0142716400008882.

    Abstract

    It is a widely held belief that women talk more than men; but experimental evidence has suggested that this belief is mistaken. The present study investigated whether listener bias contributes to this mistake. Dialogues were recorded in mixed-sex and single-sex versions, and male and female listeners judged the proportions of talk contributed to the dialogues by each participant. Female contributions to mixed-sex dialogues were rated as greater than male contributions by both male and female listeners. Female contributions were more likely to be overestimated when they were speaking a dialogue part perceived as probably female than when they were speaking a dialogue part perceived as probably male. It is suggested that the misestimates are due to a complex of factors that may involve both perceptual effects such as misjudgment of rates of speech and sociological effects such as attitudes to social roles and perception of power relations.
  • Cutler, A. (1987). Speaking for listening. In A. Allport, D. MacKay, W. Prinz, & E. Scheerer (Eds.), Language perception and production: Relationships between listening, speaking, reading and writing (pp. 23-40). London: Academic Press.

    Abstract

    Speech production is constrained at all levels by the demands of speech perception. The speaker's primary aim is successful communication, and to this end semantic, syntactic and lexical choices are directed by the needs of the listener. Even at the articulatory level, some aspects of production appear to be perceptually constrained, for example the blocking of phonological distortions under certain conditions. An apparent exception to this pattern is word boundary information, which ought to be extremely useful to listeners, but which is not reliably coded in speech. It is argued that the solution to this apparent problem lies in rethinking the concept of the boundary of the lexical access unit. Speech rhythm provides clear information about the location of stressed syllables, and listeners do make use of this information. If stressed syllables can serve as the determinants of word lexical access codes, then once again speakers are providing precisely the necessary form of speech information to facilitate perception.
  • Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.

    Abstract

    Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
  • Cutler, A. (1990). Syllabic lengthening as a word boundary cue. In R. Seidl (Ed.), Proceedings of the 3rd Australian International Conference on Speech Science and Technology (pp. 324-328). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Bisyllabic sequences which could be interpreted as one word or two were produced in sentence contexts by a trained speaker, and syllabic durations measured. Listeners judged whether the bisyllables, excised from context, were one word or two. The proportion of two-word choices correlated positively with measured duration, but only for bisyllables stressed on the second syllable. The results may suggest a limit for listener sensitivity to syllabic lengthening as a word boundary cue.
  • Cutler, A. (1980). Syllable omission errors and isochrony. In H. W. Dechet, & M. Raupach (Eds.), Temporal variables in speech: studies in honour of Frieda Goldman-Eisler (pp. 183-190). The Hague: Mouton.
  • Cutler, A., & Young, D. (1994). Rhythmic structure of word blends in English. In Proceedings of the Third International Conference on Spoken Language Processing (pp. 1407-1410). Kobe: Acoustical Society of Japan.

    Abstract

    Word blends combine fragments from two words, either in speech errors or when a new word is created. Previous work has demonstrated that in Japanese, such blends preserve moraic structure; in English they do not. A similar effect of moraic structure is observed in perceptual research on segmentation of continuous speech in Japanese; English listeners, by contrast, exploit stress units in segmentation, suggesting that a general rhythmic constraint may underlie both findings. The present study examined whether this parallel would also hold for word blends. In spontaneous English polysyllabic blends, the source words were significantly more likely to be split before a strong than before a weak (unstressed) syllable, i.e. to be split at a stress unit boundary. In an experiment in which listeners were asked to identify the source words of blends, significantly more correct detections resulted when splits had been made before strong syllables. Word blending, like speech segmentation, appears to be constrained by language rhythm.
  • Cutler, A. (1994). The perception of rhythm in language. Cognition, 50, 79-81. doi:10.1016/0010-0277(94)90021-3.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190,000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
  • Cutler, A., & Isard, S. D. (1980). The production of prosody. In B. Butterworth (Ed.), Language production (pp. 245-269). London: Academic Press.
  • Cutler, A., & Carter, D. (1987). The prosodic structure of initial syllables in English. In J. Laver, & M. Jack (Eds.), Proceedings of the European Conference on Speech Technology: Vol. 1 (pp. 207-210). Edinburgh: IEE.
  • Cutler, A., Norris, D., & Van Ooijen, B. (1990). Vowels as phoneme detection targets. In Proceedings of the First International Conference on Spoken Language Processing (pp. 581-584).

    Abstract

    Phoneme detection is a psycholinguistic task in which listeners' response time to detect the presence of a pre-specified phoneme target is measured. Typically, detection tasks have used consonant targets. This paper reports two experiments in which subjects responded to vowels as phoneme detection targets. In the first experiment, targets occurred in real words, in the second in nonsense words. Response times were long by comparison with consonantal targets. Targets in initial syllables were responded to much more slowly than targets in second syllables. Strong vowels were responded to faster than reduced vowels in real words but not in nonwords. These results suggest that the process of phoneme detection produces different results for vowels and for consonants. We discuss possible explanations for this difference, in particular the possibility of language-specificity.
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cutler, A., McQueen, J. M., Baayen, R. H., & Drexler, H. (1994). Words within words in a real-speech corpus. In R. Togneri (Ed.), Proceedings of the 5th Australian International Conference on Speech Science and Technology: Vol. 1 (pp. 362-367). Canberra: Australian Speech Science and Technology Association.

    Abstract

    In a 50,000-word corpus of spoken British English the occurrence of words embedded within other words is reported. Within-word embedding in this real speech sample is common, and analogous to the extent of embedding observed in the vocabulary. Imposition of a syllable boundary matching constraint reduces but by no means eliminates spurious embedding. Embedded words are most likely to overlap with the beginning of matrix words, and thus may pose serious problems for speech recognisers.
  • Cutler, A., & Fay, D. (1975). You have a Dictionary in your Head, not a Thesaurus. Texas Linguistic Forum, 1, 27-40.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • D'Avis, F.-J., & Gretsch, P. (1994). Variations on "Variation": On the Acquisition of Complementizers in German. In R. Tracy, & E. Lattey (Eds.), How Tolerant is Universal Grammar? (pp. 59-109). Tübingen, Germany: Max-Niemeyer-Verlag.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [ŋ] is mispronounced as [n], the [n] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Dittmar, N., Reich, A., Skiba, R., Schumacher, M., & Terborg, H. (1990). Die Erlernung modaler Konzepte des Deutschen durch erwachsene polnische Migranten: Eine empirische Längsschnittstudie. Informationen Deutsch als Fremdsprache: Info DaF, 17(2), 125-172.
  • Dittmar, N., & Klein, W. (1975). Untersuchungen zum Pidgin-Deutsch spanischer und italienischer Arbeiter in der Bundesrepublik: Ein Arbeitsbericht. In A. Wierlacher (Ed.), Jahrbuch Deutsch als Fremdsprache (pp. 170-194). Heidelberg: Groos.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardisation and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic as it is based on features of Russian phonology, rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Eibl-Eibesfeldt, I., & Senft, G. (1987). Studienbrief Rituelle Kommunikation. Hagen: FernUniversität Gesamthochschule Hagen, Fachbereich Erziehungs- und Sozialwissenschaften, Soziologie, Kommunikation - Wissen - Kultur.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1987). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. Publikation zu Wissenschaftlichen Filmen, Sektion Ethnologie, 25, 1-15.
  • Eisenbeiß, S., Bartke, S., Weyerts, H., & Clahsen, H. (1994). Elizitationsverfahren in der Spracherwerbsforschung: Nominalphrasen, Kasus, Plural, Partizipien. Theorie des Lexikons, 57.
  • Eisenbeiss, S. (2000). The acquisition of Determiner Phrase in German child language. In M.-A. Friedemann, & L. Rizzi (Eds.), The Acquisition of Syntax (pp. 26-62). Harlow, UK: Pearson Education Ltd.
  • Enfield, N. J. (2000). On linguocentrism. In M. Pütz, & M. H. Verspoor (Eds.), Explorations in linguistic relativity (pp. 125-157). Amsterdam: Benjamins.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Enfield, N. J., & Evans, G. (2000). Transcription as standardisation: The problem of Tai languages. In S. Burusphat (Ed.), Proceedings: the International Conference on Tai Studies, July 29-31, 1998, (pp. 201-212). Bangkok, Thailand: Institute of Language and Culture for Rural Development, Mahidol University.
  • Fisher, S. E., Black, G. C. M., Lloyd, S. E., Wrong, O. M., Thakker, R. V., & Craig, I. W. (1994). Isolation and partial characterization of a chloride channel gene which is expressed in kidney and is a candidate for Dent's disease (an X-linked hereditary nephrolithiasis). Human Molecular Genetics, 3, 2053-2059.

    Abstract

    Dent's disease, an X-linked renal tubular disorder, is a form of Fanconi syndrome which is characterized by proteinuria, hypercalciuria, nephrocalcinosis, kidney stones and renal failure. Previous studies localised the gene responsible to Xp11.22, within a microdeletion involving the hypervariable locus DXS255. Further analysis using new probes which flank this locus indicates that the deletion is less than 515 kb. A 185 kb YAC containing DXS255 was used to screen a cDNA library from adult kidney in order to isolate coding sequences falling within the deleted region which may be implicated in the disease aetiology. We identified two clones which are evolutionarily conserved, and detect a 9.5 kb transcript which is expressed predominantly in the kidney. Sequence analysis of 780 bp of ORF from the clones suggests that the identified gene, termed hClC-K2, encodes a new member of the ClC family of voltage-gated chloride channels. Genomic fragments detected by the cDNA clones are completely absent in patients who have an associated microdeletion. On the basis of the expression pattern, proposed function and deletion mapping, hClC-K2 is a strong candidate for Dent's disease.
  • Flores d'Arcais, G., & Lahiri, A. (1987). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.8 1987. Nijmegen: MPI for Psycholinguistics.
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
  • Friederici, A., & Levelt, W. J. M. (1987). Resolving perceptual conflicts: The cognitive mechanism of spatial orientation. Aviation, Space, and Environmental Medicine, 58(9), A164-A169.
  • Friederici, A., & Levelt, W. J. M. (1987). Spatial description in microgravity: Aspects of cognitive adaptation. In P. R. Sahm, R. Jansen, & M. Keller (Eds.), Proceedings of the Norderney Symposium on Scientific Results of the German Spacelab Mission D1 (pp. 518-524). Köln, Germany: Wissenschaftliche Projektführung DI c/o DFVLR.
  • Friederici, A., & Levelt, W. J. M. (1990). Spatial reference in weightlessness: Perceptual factors and mental representations. Perception and Psychophysics, 47, 253-266.

    Abstract

    The role of gravity in spatial coordinate assignment and the mental representation of space were studied in three experiments, varying different perceptual cues systematically: the retinal, the visual background, the vestibular, and proprioceptive information. Verbal descriptions of visually presented arrays were required under different head positions (straight/tilt) and under different gravitational conditions (gravity present/gravity absent). The results of two experiments conducted with 2 subjects who participated in a space flight revealed that subjects are able to adequately assign positions in space in the absence of gravitational information, and that they do this by using their head-retinal coordinates as primary references. This indicates that they cognitively adapted to the perceptually new situation. The findings from a third experiment conducted with a larger group of subjects under a condition in which the gravitational information was present but irrelevant to the task being solved (subjects were in a horizontal supine position) show that subjects, in general, are flexible in using cues other than gravitational ones as references when the latter cannot serve as a referential system. These findings, together with the observation that consistent spatial assignment is possible even immediately after first exposure to the perceptually totally novel situation of weightlessness, seem to suggest that the mental representation of space, onto which given perceptual information is mapped, is independent of a particular percept.
  • Friederici, A., & Levelt, W. J. M. (1987). Sprache. In K. Immelmann, K. Scherer, & C. Vogel (Eds.), Funkkolleg Psychobiologie (pp. 58-87). Weinheim: Beltz.
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train" (5) and the "entangled-bank" (6, 7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.

    Abstract

    Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala's Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue.
  • Hagoort, P. (2000). De toekomstige eeuw der cognitieve neurowetenschap [inaugural lecture]. Katholieke Universiteit Nijmegen.

    Abstract

    Lecture delivered on 12 May 2000 on the occasion of accepting the professorship in neuropsychology at the Faculty of Social Sciences, Katholieke Universiteit Nijmegen.
