Publications

  • Brown, P. (1990). Gender, politeness and confrontation in Tenejapa. Discourse Processes, 13, 123-141.

    Abstract

    This paper compares some features of the interactional details of a Tenejapan (Mexico) court case with the features of social interaction characteristic of ordinary, casual encounters in this society. It is suggested that courtroom behaviour in Tenejapa is a very special form of interaction, in a context that uniquely allows for direct face-to-face confrontation in a society where a premium is placed on interactional restraint. Courtroom speech in Tenejapa directly contravenes norms and conventions that operate in other contexts, and women’s conventionalized polite ‘ways of putting things’ are here used sarcastically to be impolite. Thus, in this society, gender operates across contexts as a ‘master status’, but with gender meanings transformed in the different contexts: forms associated with superficial cooperation and agreement being used to emphasize lack of cooperation, disagreement, and hostility. The implications of this Tenejapan phenomenon for our understanding of the nature of relations between language and gender are explored.
  • Brown, P. (2003). Multimodal multiperson interaction with infants aged 9 to 15 months. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 22-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877610.

    Abstract

    Interaction, for all that it has an ethological base, is culturally constituted, and how new social members are enculturated into the interactional practices of the society is of critical interest to our understanding of interaction – how much is learned, how variable is it across cultures – as well as to our understanding of the role of culture in children’s social-cognitive development. The goal of this task is to document the nature of caregiver infant interaction in different cultures, especially during the critical age of 9-15 months when children come to have an understanding of others’ intentions. This is of interest to all students of interaction; it does not require specialist knowledge of children.
  • Brown, P. (1998). La identificación de las raíces verbales en Tzeltal (Maya): Cómo lo hacen los niños? Función, 17-18, 121-146.

    Abstract

    This is a Spanish translation of Brown 1997.
  • Brown, P. (1998). How and why are women more polite: Some evidence from a Mayan community. In J. Coates (Ed.), Language and gender (pp. 81-99). Oxford: Blackwell.
  • Brown, P. (1991). Sind Frauen höflicher? Befunde aus einer Maya-Gemeinde. In S. Günther, & H. Kotthoff (Eds.), Von fremden Stimmen: Weibliches und männliches Sprechen im Kulturvergleich. Frankfurt am Main: Suhrkamp.

    Abstract

    This is a German translation of Brown 1980, How and why are women more polite: Some evidence from a Mayan community.
  • Brown, P. (1995). Politeness strategies and the attribution of intentions: The case of Tzeltal irony. In E. Goody (Ed.), Social intelligence and interaction (pp. 153-174). Cambridge: Cambridge University Press.

    Abstract

    In this paper I take up the idea that human thinking is systematically biased in the direction of interactive thinking (E. Goody's anticipatory interactive planning), that is, that humans are peculiarly good at, and inordinately prone to, attributing intentions and goals to one another (as well as to non-humans), and that they routinely orient to presumptions about each other's intentions in what they say and do. I explore the implications of that idea for an understanding of politeness in interaction, taking as a starting point the Brown and Levinson (1987) model of politeness, which assumes interactive thinking, a notion implicit in the formulation of politeness as strategic orientation to face. Drawing on an analysis of the phenomenon of conventionalized ‘irony’ in Tzeltal, I emphasize that politeness does not inhere in linguistic form per se but is a matter of conveying a polite intention, and argue that Tzeltal irony provides a prime example of one way in which humans' highly-developed intellectual machinery for inferring alter's intentions is put to the service of social relationships.
  • Brown, P., & Levinson, S. C. (1998). Politeness, introduction to the reissue: A review of recent work. In A. Kasher (Ed.), Pragmatics: Vol. 6 Grammar, psychology and sociology (pp. 488-554). London: Routledge.

    Abstract

    This article is a reprint of chapter 1, the introduction to Brown and Levinson, 1987, Politeness: Some universals in language usage (Cambridge University Press).
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Butterfield, S., & Cutler, A. (1990). Intonational cues to word boundaries in clear speech? In Proceedings of the Institute of Acoustics: Vol 12, part 10 (pp. 87-94). St. Albans, Herts.: Institute of Acoustics.
  • Byun, K.-S. (2007). Becoming friends with Korean Sign Language. Cheonan: Chungnam Association of the Deaf.
  • Cablitz, G., Ringersma, J., & Kemps-Snijders, M. (2007). Visualizing endangered indigenous languages of French Polynesia with LEXUS. In Proceedings of the 11th International Conference Information Visualization (IV07) (pp. 409-414). IEEE Computer Society.

    Abstract

    This paper reports on the first results of the DOBES project ‘Towards a multimedia dictionary of the Marquesan and Tuamotuan languages of French Polynesia’. Within the framework of this project we are building a digital multimedia encyclopedic lexicon of the endangered Marquesan and Tuamotuan languages using a new tool, LEXUS. LEXUS is a web-based lexicon tool, targeted at linguists involved in language documentation. LEXUS offers the possibility to visualize language. It provides functionality to add audio, video and still images to the lexical entries of the dictionary, as well as relational linking for the creation of a semantic network knowledge base. Further activities aim at the development of (1) an improved user interface in close cooperation with the speech community and (2) a collaborative workspace functionality which will allow the speech community to actively participate in the creation of lexica.
  • Cameron-Faulkner, T., & Kidd, E. (2007). I'm are what I'm are: The acquisition of first-person singular present BE. Cognitive Linguistics, 18(1), 1-22. doi:10.1515/COG.2007.001.

    Abstract

    The present study investigates the development of am in the speech of one English-speaking child, Scarlett (aged 4;6–5;6). We show that am is infrequent in the speech addressed to children; the acquisition of this form of BE presents a unique insight into the processes underlying language development because children have little evidence regarding its correct use. Scarlett produced a pervasive error where she overextended are to first-person singular contexts where am was required (e.g., I'm are trying, When are I'm finished?). Am gradually emerged in her speech on what appears to be a construction-specific basis. The findings of the study are used in support of a usage-based, constructivist approach to language development.
  • Caramazza, A., Miozzo, M., Costa, A., Schiller, N. O., & Alario, F.-X. (2003). Étude comparée de la production des déterminants dans différentes langues. In E. Dupoux (Ed.), Les Langages du cerveau: Textes en l'honneur de Jacques Mehler (pp. 213-229). Paris: Odile Jacob.
  • Carota, F. (2007). Collaborative use of contrastive markers: Contextual and co-textual implications. In A. Fetzer (Ed.), Context and Appropriateness: Micro meets macro (pp. 235-260). Amsterdam: Benjamins.

    Abstract

    The study presented in this paper examines the context-dependence and dialogue functions of the Italian contrastive markers ma (but), invece (instead), mentre (while) and però (nevertheless) within task-oriented dialogues. Corpus data evidence their sensitivity to a cognitive interpersonal context, conceived as a common ground. Such a cognitive state - shared by co-participants through the coordinative process of grounding - interacts with the global dialogue structure, which is cognitively shaped by 'meta-negotiating' and grounding the dialogue topic. Locally, the relation between the current dialogue structural units and the global dialogue topic is said to be specified by information structure, in particular intra-utterance themes. It is argued that contrastive markers re-orient the co-participants' cognitive states towards grounding ungrounded topical aspects to be meta-negotiated. They offer a collaborative context-updating strategy, tracking the status of common ground during dialogue topic management.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121, 1053-1063. doi:10.1093/brain/121.6.1053.

    Abstract

    Learning a specific skill during childhood may partly determine the functional organization of the adult brain. This hypothesis led us to study oral language processing in illiterate subjects who, for social reasons, had never entered school and had no knowledge of reading or writing. In a brain activation study using PET and statistical parametric mapping, we compared word and pseudoword repetition in literate and illiterate subjects. Our study confirms behavioural evidence of different phonological processing in illiterate subjects. During repetition of real words, the two groups performed similarly and activated similar areas of the brain. In contrast, illiterate subjects had more difficulty repeating pseudowords correctly and did not activate the same neural structures as literates. These results are consistent with the hypothesis that learning the written form of language (orthography) interacts with the function of oral language. Our results indicate that learning to read and write during childhood influences the functional organization of the adult human brain.
  • Chen, A., Den Os, E., & De Ruiter, J. P. (2007). Pitch accent type matters for online processing of information status: Evidence from natural and synthetic speech. The Linguistic Review, 24(2), 317-344. doi:10.1515/TLR.2007.012.

    Abstract

    Adopting an eyetracking paradigm, we investigated the role of H*L, L*HL, L*H, H*LH, and deaccentuation at the intonational phrase-final position in online processing of information status in British English in natural speech. The role of H*L, L*H and deaccentuation was also examined in diphone-synthetic speech. It was found that H*L and L*HL create a strong bias towards newness, whereas L*H, like deaccentuation, creates a strong bias towards givenness. In synthetic speech, the same effect was found for H*L, L*H and deaccentuation, but it was delayed. The delay may not be caused entirely by the difference in the segmental quality between synthetic and natural speech. The pitch accent H*LH, however, appears to bias participants' interpretation to the target word, independent of its information status. This finding was explained in the light of the effect of durational information at the segmental level on word recognition.
  • Chen, X. S., Rozhdestvensky, T. S., Collins, L. J., Schmitz, J., & Penny, D. (2007). Combined experimental and computational approach to identify non-protein-coding RNAs in the deep-branching eukaryote Giardia intestinalis. Nucleic Acids Research, 35, 4619-4628. doi:10.1093/nar/gkm474.

    Abstract

    Non-protein-coding RNAs represent a large proportion of transcribed sequences in eukaryotes. These RNAs often function in large RNA–protein complexes, which are catalysts in various RNA-processing pathways. As RNA processing has become an increasingly important area of research, numerous non-messenger RNAs have been uncovered in all the model eukaryotic organisms. However, knowledge on RNA processing in deep-branching eukaryotes is still limited. This study focuses on the identification of non-protein-coding RNAs from the diplomonad parasite Giardia intestinalis, showing that a combined experimental and computational search strategy is a fast method of screening reduced or compact genomes. The analysis of our Giardia cDNA library has uncovered 31 novel candidates, including C/D-box and H/ACA box snoRNAs, as well as an unusual transcript of RNase P, and double-stranded RNAs. Subsequent computational analysis has revealed additional putative C/D-box snoRNAs. Our results will lead towards a future understanding of RNA metabolism in the deep-branching eukaryote Giardia, as more ncRNAs are characterized.
  • Chen, J. (2007). 'He cut-break the rope': Encoding and categorizing cutting and breaking events in Mandarin. Cognitive Linguistics, 18(2), 273-285. doi:10.1515/COG.2007.015.

    Abstract

    Mandarin categorizes cutting and breaking events on the basis of fine semantic distinctions in the causal action and the caused result. I demonstrate the semantics of Mandarin C&B verbs from the perspective of event encoding and categorization as well as argument structure alternations. Three semantically different types of predicates can be identified: verbs denoting the C&B action subevent, verbs encoding the C&B result subevent, and resultative verb compounds (RVC) that encode both the action and the result subevents. The first verb of an RVC is basically dyadic, whereas the second is monadic. RVCs as a whole are also basically dyadic, and do not undergo detransitivization.
  • Chen, A., & Fikkert, P. (2007). Intonation of early two-word utterances in Dutch. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 315-320). Dudweiler: Pirrot.

    Abstract

    We analysed intonation contours of two-word utterances from three monolingual Dutch children aged between 1;4 and 2;1 in the autosegmental-metrical framework. Our data show that children have mastered the inventory of the boundary tones and nuclear pitch accent types (except for L*HL and L*!HL) at the 160-word level, and the set of nondownstepped pre-nuclear pitch accents (except for L*) at the 230-word level, contra previous claims on the mastery of adult-like intonation contours before or at the onset of first words. Further, there is evidence that intonational development is correlated with an increase in vocabulary size. Moreover, we found that children show a preference for falling contours, as predicted on the basis of universal production mechanisms. In addition, the utterances are mostly spoken with both words accented independent of semantic relations expressed and information status of each word across developmental stages, contra prior work. Our study suggests a number of topics for further research.
  • Chen, A. (2007). Intonational realisation of topic and focus by Dutch-acquiring 4- to 5-year-olds. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1553-1556). Dudweiler: Pirrot.

    Abstract

    This study examined how Dutch-acquiring 4- to 5-year-olds use different pitch accent types and deaccentuation to mark topic and focus at the sentence level and how they differ from adults. The topic and focus were non-contrastive and realised as full noun phrases. It was found that children realise topic and focus similarly frequently with H*L, whereas adults use H*L noticeably more frequently in focus than in topic in sentence-initial position and nearly only in focus in sentence-final position. Further, children frequently realise the topic with an accent, whereas adults mostly deaccent the sentence-final topic and use H*L and H* to realise the sentence-initial topic because of rhythmic motivation. These results show that 4- and 5-year-olds have not acquired H*L as the typical focus accent and deaccentuation as the typical topic intonation yet. Possibly, frequent use of H*L in sentence-initial topic in adult Dutch has made it difficult to extract the functions of H*L and deaccentuation from the input.
  • Chen, A. (2007). Language-specificity in the perception of continuation intonation. In C. Gussenhoven, & T. Riad (Eds.), Tones and tunes II: Phonetic and behavioural studies in word and sentence prosody (pp. 107-142). Berlin: Mouton de Gruyter.

    Abstract

    This paper addressed the question of how British English, German and Dutch listeners differ in their perception of continuation intonation both at the phonological level (Experiment 1) and at the level of phonetic implementation (Experiment 2). In Experiment 1, preference scores of pitch contours to signal continuation at the clause-boundary were obtained from these listener groups. It was found that among contours with H%, British English listeners had a strong preference for H*L H%, as predicted. Unexpectedly, British English listeners rated H* H% noticeably more favourably than L*H H%; Dutch listeners largely rated H* H% more favourably than H*L H% and L*H H%; German listeners rated these contours similarly and seemed to have a slight preference for H*L H%. In Experiment 2, the degree to which a final rise was perceived to express continuation was established for each listener group in a made-up language. It was found that although all listener groups associated a higher end pitch with a higher degree of continuation likelihood, the perceived meaning difference for a given interval of end pitch heights varied with the contour shape of the utterance final syllable. When it was comparable to H* H%, British English and Dutch listeners perceived a larger meaning difference than German listeners; when it was comparable to H*L H%, British English listeners perceived a larger difference than German and Dutch listeners. This shows that language-specificity in continuation intonation at the phonological level affects the perception of continuation intonation at the phonetic level.
  • Chen, A. (2003). Language dependence in continuation intonation. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1069-1072). Rundle Mall, SA, Austr.: Causal Productions Pty.
  • Chen, A. (2003). Reaction time as an indicator to discrete intonational contrasts in English. In Proceedings of Eurospeech 2003 (pp. 97-100).

    Abstract

    This paper reports a perceptual study using a semantically motivated identification task in which we investigated the nature of two pairs of intonational contrasts in English: (1) normal High accent vs. emphatic High accent; (2) early peak alignment vs. late peak alignment. Unlike previous inquiries, the present study employs an on-line method using the Reaction Time measurement, in addition to the measurement of response frequencies. Regarding the peak height continuum, the mean RTs are shortest for within-category identification but longest for across-category identification. As for the peak alignment contrast, no identification boundary emerges and the mean RTs only reflect a difference between peaks aligned with the vowel onset and peaks aligned elsewhere. We conclude that the peak height contrast is discrete but the previously claimed discreteness of the peak alignment contrast is not borne out.
  • Cho, T., McQueen, J. M., & Cox, E. A. (2007). Prosodically driven phonetic detail in speech processing: The case of domain-initial strengthening in English. Journal of Phonetics, 35(2), 210-243. doi:10.1016/j.wocn.2006.03.003.

    Abstract

    We explore the role of the acoustic consequences of domain-initial strengthening in spoken-word recognition. In two cross-modal identity-priming experiments, listeners heard sentences and made lexical decisions to visual targets, presented at the onset of the second word in two-word sequences containing lexical ambiguities (e.g., bus tickets, with the competitor bust). These sequences contained Intonational Phrase (IP) or Prosodic Word (Wd) boundaries, and the second word's initial Consonant and Vowel (CV, e.g., [tI]) was spliced from another token of the sequence in IP- or Wd-initial position. Acoustic analyses showed that IP-initial consonants were articulated more strongly than Wd-initial consonants. In Experiment 1, related targets were post-boundary words (e.g., tickets). No strengthening effect was observed (i.e., identity priming effects did not vary across splicing conditions). In Experiment 2, related targets were pre-boundary words (e.g., bus). There was a strengthening effect (stronger priming when the post-boundary CVs were spliced from IP-initial than from Wd-initial position), but only in Wd-boundary contexts. These were the conditions where phonetic detail associated with domain-initial strengthening could assist listeners most in lexical disambiguation. We discuss how speakers may strengthen domain-initial segments during production and how listeners may use the resulting acoustic correlates of prosodic strengthening during word recognition.
  • Cho, T. (2003). Lexical stress, phrasal accent and prosodic boundaries in the realization of domain-initial stops in Dutch. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2657-2660). Adelaide: Causal Productions.

    Abstract

    This study examines the effects of prosodic boundaries, lexical stress, and phrasal accent on the acoustic realization of stops (/t, d/) in Dutch, with special attention paid to language-specificity in the phonetics-prosody interface. The results obtained from various acoustic measures show systematic phonetic variations in the production of /t d/ as a function of prosodic position, which may be interpreted as being due to prosodically conditioned articulatory strengthening. Shorter VOTs were found for the voiceless stop /t/ in prosodically stronger locations (as opposed to longer VOTs in this position in English). The results suggest that prosodically-driven phonetic realization is bounded by a language-specific phonological feature system.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Christoffels, I. K., Formisano, E., & Schiller, N. O. (2007). The neural correlates of verbal feedback processing: An fMRI study employing overt speech. Human Brain Mapping, 28(9), 868-879. doi:10.1002/hbm.20315.

    Abstract

    Speakers use external auditory feedback to monitor their own speech. Feedback distortion has been found to increase activity in the superior temporal areas. Using fMRI, the present study investigates the neural correlates of processing verbal feedback without distortion. In a blocked design, the following conditions were presented: (1) overt picture-naming, (2) overt picture-naming while pink noise was presented to mask external feedback, (3) covert picture-naming, (4) listening to the picture names (previously recorded from participants' own voices), and (5) listening to pink noise. The results show that auditory feedback processing involves a network of different areas related to general performance monitoring and speech-motor control. These include the cingulate cortex and the bilateral insula, supplementary motor area, bilateral motor areas, cerebellum, thalamus and basal ganglia. Our findings suggest that the anterior cingulate cortex, which is often implicated in error-processing and conflict-monitoring, is also engaged in ongoing speech monitoring. Furthermore, in the superior temporal gyrus, we found a reduced response to speaking under normal feedback conditions. This finding is interpreted in the framework of a forward model according to which, during speech production, the sensory consequence of the speech-motor act is predicted to attenuate the sensitivity of the auditory cortex.
  • Christoffels, I. K., Firk, C., & Schiller, N. O. (2007). Bilingual language control: An event-related brain potential study. Brain Research, 1147, 192-208. doi:10.1016/j.brainres.2007.01.137.

    Abstract

    This study addressed how bilingual speakers switch between their first and second language when speaking. Event-related brain potentials (ERPs) and naming latencies were measured while unbalanced German (L1)-Dutch (L2) speakers performed a picture-naming task. Participants named pictures either in their L1 or in their L2 (blocked language conditions), or participants switched between their first and second language unpredictably (mixed language condition). Furthermore, form similarity between translation equivalents (cognate status) was manipulated. A cognate facilitation effect was found for L1 and L2 indicating phonological activation of the non-response language in blocked and mixed language conditions. The ERP data also revealed small but reliable effects of cognate status. Language switching resulted in equal switching costs for both languages and was associated with a modulation in the ERP waveforms (time windows 275-375 ms and 375-475 ms). Mixed language context affected especially the L1, both in ERPs and in latencies, which became slower in L1 than L2. It is suggested that sustained and transient components of language control should be distinguished. Results are discussed in relation to current theories of bilingual language processing.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Chwilla, D., Brown, C. M., & Hagoort, P. (1995). The N400 as a function of the level of processing. Psychophysiology, 32, 274-285. doi:10.1111/j.1469-8986.1995.tb02956.x.

    Abstract

    In a semantic priming paradigm, the effects of different levels of processing on the N400 were assessed by changing the task demands. In the lexical decision task, subjects had to discriminate between words and nonwords and in the physical task, subjects had to discriminate between uppercase and lowercase letters. The proportion of related versus unrelated word pairs differed between conditions. A lexicality test on reaction times demonstrated that the physical task was performed nonlexically. Moreover, a semantic priming reaction time effect was obtained only in the lexical decision task. The level of processing clearly affected the event-related potentials. An N400 priming effect was only observed in the lexical decision task. In contrast, in the physical task a P300 effect was observed for either related or unrelated targets, depending on their frequency of occurrence. Taken together, the results indicate that an N400 priming effect is only evoked when the task performance induces the semantic aspects of words to become part of an episodic trace of the stimulus event.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties. Gramma/TTT, 9, 141-156.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Cutler, A., & Butterfield, S. (2003). Rhythmic cues to speech segmentation: Evidence from juncture misperception. In J. Field (Ed.), Psycholinguistics: A resource book for students. (pp. 185-189). London: Routledge.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non-native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A. (2003). The perception of speech: Psycholinguistic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 154-157). Oxford: Oxford University Press.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A., & Fear, B. D. (1991). Categoricality in acceptability judgements for strong versus weak vowels. In J. Llisterri (Ed.), Proceedings of the ESCA Workshop on Phonetics and Phonology of Speaking Styles (pp. 18.1-18.5). Barcelona, Catalonia: Universitat Autonoma de Barcelona.

    Abstract

    A distinction between strong and weak vowels can be drawn on the basis of vowel quality, of stress, or of both factors. An experiment was conducted in which sets of contextually matched word-initial vowels ranging from clearly strong to clearly weak were cross-spliced, and the naturalness of the resulting words was rated by listeners. The ratings showed that in general cross-spliced words were only significantly less acceptable than unspliced words when schwa was not involved; this supports a categorical distinction based on vowel quality.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A. (1985). Cross-language psycholinguistics. Linguistics, 23, 659-667.
  • Cutler, A., & Butterfield, S. (1990). Durational cues to word boundaries in clear speech. Speech Communication, 9, 485-495.

    Abstract

    One of a listener’s major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., McQueen, J. M., & Robinson, K. (1990). Elizabeth and John: Sound patterns of men’s and women’s names. Journal of Linguistics, 26, 471-482. doi:10.1017/S0022226700014754.
  • Cutler, A., Wales, R., Cooper, N., & Janssen, J. (2007). Dutch listeners' use of suprasegmental cues to English stress. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1913-1916). Dudweiler: Pirrot.

    Abstract

    Dutch listeners outperform native listeners in identifying syllable stress in English. This is because lexical stress is more useful in recognition of spoken words of Dutch than of English, so that Dutch listeners pay greater attention to stress in general. We examined Dutch listeners’ use of the acoustic correlates of English stress. Primary- and secondary-stressed syllables differ significantly on acoustic measures, and some differences, in F0 especially, correlate with data of earlier listening experiments. The correlations found in the Dutch responses were not paralleled in data from native listeners. Thus the acoustic cues which distinguish English primary versus secondary stress are better exploited by Dutch than by native listeners.
  • Cutler, A., & Weber, A. (2007). Listening experience and phonetic-to-lexical mapping in L2. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 43-48). Dudweiler: Pirrot.

    Abstract

    In contrast to initial L1 vocabularies, which of necessity depend largely on heard exemplars, L2 vocabulary construction can draw on a variety of knowledge sources. This can lead to richer stored knowledge about the phonology of the L2 than the listener's prelexical phonetic processing capacity can support, and thus to mismatch between the level of detail required for accurate lexical mapping and the level of detail delivered by the prelexical processor. Experiments on spoken word recognition in L2 have shown that phonetic contrasts which are not reliably perceived are represented in the lexicon nonetheless. This lexical representation of contrast must be based on abstract knowledge, not on veridical representation of heard exemplars. New experiments confirm that provision of abstract knowledge (in the form of spelling) can induce lexical representation of a contrast which is not reliably perceived; but also that experience (in the form of frequency of occurrence) modulates the mismatch of phonetic and lexical processing. We conclude that a correct account of word recognition in L2 (as indeed in L1) requires consideration of both abstract and episodic information.
  • Cutler, A., Cooke, M., Garcia-Lecumberri, M. L., & Pasveer, D. (2007). L2 consonant identification in noise: Cross-language comparisons. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1585-1588). Adelaide: Causal Productions.

    Abstract

    The difficulty of listening to speech in noise is exacerbated when the speech is in the listener’s L2 rather than L1. In this study, Spanish and Dutch users of English as an L2 identified American English consonants in a constant intervocalic context. Their performance was compared with that of L1 (British English) listeners, under quiet conditions and when the speech was masked by speech from another talker or by noise. Masking affected performance more for the Spanish listeners than for the L1 listeners, but not for the Dutch listeners, whose performance was worse than the L1 case to about the same degree in all conditions. There were, however, large differences in the pattern of results across individual consonants, which were consistent with differences in how consonants are identified in the respective L1s.
  • Cutler, A. (1990). From performance to phonology: Comments on Beckman and Edwards's paper. In J. Kingston, & M. Beckman (Eds.), Papers in laboratory phonology I: Between the grammar and physics of speech (pp. 208-214). Cambridge: Cambridge University Press.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (a coup stick snot with standing). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A. (1990). Exploiting prosodic probabilities in speech segmentation. In G. Altmann (Ed.), Cognitive models of speech processing: Psycholinguistic and computational perspectives (pp. 105-121). Cambridge, MA: MIT Press.
  • Cutler, A., & Pearson, M. (1985). On the analysis of prosodic turn-taking cues. In C. Johns-Lewis (Ed.), Intonation in discourse (pp. 139-155). London: Croom Helm.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1985). Performance measures of lexical complexity. In G. Hoppenbrouwers, P. A. Seuren, & A. Weijters (Eds.), Meaning and the lexicon (pp. 75). Dordrecht: Foris.
  • Cutler, A., & Chen, H.-C. (1995). Phonological similarity effects in Cantonese word recognition. In K. Elenius, & P. Branderud (Eds.), Proceedings of the Thirteenth International Congress of Phonetic Sciences: Vol. 1 (pp. 106-109). Stockholm: Stockholm University.

    Abstract

    Two lexical decision experiments in Cantonese are described in which the recognition of spoken target words as a function of phonological similarity to a preceding prime is investigated. Phonological similarity in first syllables produced inhibition, while similarity in second syllables led to facilitation. Differences between syllables in tonal and segmental structure had generally similar effects.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1991). Prosody in situations of communication: Salience and segmentation. In Proceedings of the Twelfth International Congress of Phonetic Sciences: Vol. 1 (pp. 264-270). Aix-en-Provence: Université de Provence, Service des publications.

    Abstract

    Speakers and listeners have a shared goal: to communicate. The processes of speech perception and of speech production interact in many ways under the constraints of this communicative goal; such interaction is as characteristic of prosodic processing as of the processing of other aspects of linguistic structure. Two of the major uses of prosodic information in situations of communication are to encode salience and segmentation, and these themes unite the contributions to the symposium introduced by the present review.
  • Cutler, A., & Ladd, D. R. (Eds.). (1983). Prosody: Models and measurements. Heidelberg: Springer.
  • Cutler, A. (1983). Semantics, syntax and sentence accent. In M. Van den Broecke, & A. Cohen (Eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (pp. 85-91). Dordrecht: Foris.
  • Cutler, A., & Scott, D. R. (1990). Speaker sex and perceived apportionment of talk. Applied Psycholinguistics, 11, 253-272. doi:10.1017/S0142716400008882.

    Abstract

    It is a widely held belief that women talk more than men; but experimental evidence has suggested that this belief is mistaken. The present study investigated whether listener bias contributes to this mistake. Dialogues were recorded in mixed-sex and single-sex versions, and male and female listeners judged the proportions of talk contributed to the dialogues by each participant. Female contributions to mixed-sex dialogues were rated as greater than male contributions by both male and female listeners. Female contributions were more likely to be overestimated when they were speaking a dialogue part perceived as probably female than when they were speaking a dialogue part perceived as probably male. It is suggested that the misestimates are due to a complex of factors that may involve both perceptual effects such as misjudgment of rates of speech and sociological effects such as attitudes to social roles and perception of power relations.
  • Cutler, A. (1983). Speakers’ conceptions of the functions of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 79-91). Heidelberg: Springer.
  • Cutler, A. (1995). Spoken word recognition and production. In J. L. Miller, & P. D. Eimas (Eds.), Speech, language and communication (pp. 97-136). New York: Academic Press.

    Abstract

    This chapter highlights that most language behavior consists of speaking and listening. The chapter also reveals differences and similarities between speaking and listening. The laboratory study of word production raises formidable problems; ensuring that a particular word is produced may subvert the spontaneous production process. Word production is investigated via slips and tip-of-the-tongue (TOT), primarily via instances of processing failure, and via the technique of the picture-naming task. The methodology of word production is explained in the chapter. The chapter also explains the phenomenon of interaction between various stages of word production and the process of speech recognition. In this context, it explores the difference between sound and meaning and examines whether or not the comparisons are appropriate between the processes of recognition and production of spoken words. It also describes the similarities and differences in the structure of the recognition and production systems. Finally, the chapter highlights the common issues in recognition and production research, which include the nuances of frequency of occurrence, morphological structure, and phonological structure.
  • Cutler, A. (1995). Spoken-word recognition. In G. Bloothooft, V. Hazan, D. Hubert, & J. Llisterri (Eds.), European studies in phonetics and speech communication (pp. 66-71). Utrecht: OTS.
  • Cutler, A. (1990). Syllabic lengthening as a word boundary cue. In R. Seidl (Ed.), Proceedings of the 3rd Australian International Conference on Speech Science and Technology (pp. 324-328). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Bisyllabic sequences which could be interpreted as one word or two were produced in sentence contexts by a trained speaker, and syllabic durations measured. Listeners judged whether the bisyllables, excised from context, were one word or two. The proportion of two-word choices correlated positively with measured duration, but only for bisyllables stressed on the second syllable. The results may suggest a limit for listener sensitivity to syllabic lengthening as a word boundary cue.
  • Cutler, A. (1995). The perception of rhythm in spoken and written language. In J. Mehler, & S. Franck (Eds.), Cognition on cognition (pp. 283-288). Cambridge, MA: MIT Press.
  • Cutler, A., & McQueen, J. M. (1995). The recognition of lexical units in speech. In B. De Gelder, & J. Morais (Eds.), Speech and reading: A comparative approach (pp. 33-47). Hove, UK: Erlbaum.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., Hawkins, J. A., & Gilligan, G. (1985). The suffixing preference: A processing explanation. Linguistics, 23, 723-758.
  • Cutler, A., Norris, D., & Van Ooijen, B. (1990). Vowels as phoneme detection targets. In Proceedings of the First International Conference on Spoken Language Processing (pp. 581-584).

    Abstract

    Phoneme detection is a psycholinguistic task in which listeners' response time to detect the presence of a pre-specified phoneme target is measured. Typically, detection tasks have used consonant targets. This paper reports two experiments in which subjects responded to vowels as phoneme detection targets. In the first experiment, targets occurred in real words, in the second in nonsense words. Response times were long by comparison with consonantal targets. Targets in initial syllables were responded to much more slowly than targets in second syllables. Strong vowels were responded to faster than reduced vowels in real words but not in nonwords. These results suggest that the process of phoneme detection produces different results for vowels and for consonants. We discuss possible explanations for this difference, in particular the possibility of language-specificity.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A. (1995). Universal and Language-Specific in the Development of Speech. Biology International, (Special Issue 33).
  • Dahan, D., & Gaskell, M. G. (2007). The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition. Journal of Memory and Language, 57(4), 483-501. doi:10.1016/j.jml.2007.01.001.

    Abstract

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants’ responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech-contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
  • Damian, M. F., & Abdel Rahman, R. (2003). Semantic priming in the naming of objects and famous faces. British Journal of Psychology, 94(4), 517-527.

    Abstract

    Researchers interested in face processing have recently debated whether access to the name of a known person occurs in parallel with retrieval of semantic-biographical codes, rather than in a sequential fashion. Recently, Schweinberger, Burton, and Kelly (2001) took a failure to obtain a semantic context effect in a manual syllable judgment task on names of famous faces as support for this position. In two experiments, we compared the effects of visually presented categorically related prime words with either objects (e.g. prime: animal; target: dog) or faces of celebrities (e.g. prime: actor; target: Bruce Willis) as targets. Targets were either manually categorized with regard to the number of syllables (as in Schweinberger et al.), or they were overtly named. For neither objects nor faces was semantic priming obtained in syllable decisions; crucially, however, priming was obtained when objects and faces were overtly named. These results suggest that both face and object naming are susceptible to semantic context effects.
  • Danziger, E. (1995). Intransitive predicate form class survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 46-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004298.

    Abstract

    Different linguistic structures allow us to highlight distinct aspects of a situation. The aim of this survey is to investigate similarities and differences in the expression of situations or events as “stative” (maintaining a state), “inchoative” (adopting a state) and “agentive” (causing something to be in a state). The questionnaire focuses on the encoding of stative, inchoative and agentive possibilities for the translation equivalents of a set of English verbs.
  • Danziger, E. (1995). Posture verb survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 33-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004235.

    Abstract

    Expressions of human activities and states are a rich area for cross-linguistic comparison. Some languages of the world treat human posture verbs (e.g., sit, lie, kneel) as a special class of predicates, with distinct formal properties. This survey examines lexical, semantic and grammatical patterns for posture verbs, with special reference to contrasts between “stative” (maintaining a posture), “inchoative” (adopting a posture), and “agentive” (causing something to adopt a posture) constructions. The enquiry is thematically linked to the more general questionnaire 'Intransitive Predicate Form Class Survey'.
  • Davidson, D. J., & Indefrey, P. (2007). An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Research, 1158, 81-92. doi:10.1016/j.brainres.2007.04.082.

    Abstract

    The relationship between semantic and grammatical processing in sentence comprehension was investigated by examining event-related potential (ERP) and event-related power changes in response to semantic and grammatical violations. Sentences with semantic, phrase structure, or number violations and matched controls were presented serially (1.25 words/s) to 20 participants while EEG was recorded. Semantic violations were associated with an N400 effect and a theta band increase in power, while grammatical violations were associated with a P600 effect and an alpha/beta band decrease in power. A quartile analysis showed that for both types of violations, larger average violation effects were associated with lower relative amplitudes of oscillatory activity, implying an inverse relation between ERP amplitude and event-related power magnitude change in sentence processing.
  • Declerck, T., Cunningham, H., Saggion, H., Kuper, J., Reidsma, D., & Wittenburg, P. (2003). MUMIS - Advanced information extraction for multimedia indexing and searching digital media - Processing for multimedia interactive services. 4th European Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 553-556.
  • Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. PNAS, 104, 10944-10949. doi:10.1073/pnas.0610848104.

    Abstract

    The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.

  • Dediu, D. (2007). Non-spurious correlations between genetic and linguistic diversities in the context of human evolution. PhD Thesis, University of Edinburgh, Edinburgh, UK.
  • Deutsch, W., & Frauenfelder, U. (1985). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.6 1985. Nijmegen: MPI for Psycholinguistics.
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dietrich, R., Klein, W., & Noyau, C. (1995). The acquisition of temporality in a second language. Amsterdam: Benjamins.
  • Dimroth, C. (2007). Zweitspracherwerb bei Kindern und Jugendlichen: Gemeinsamkeiten und Unterschiede. In T. Anstatt (Ed.), Mehrsprachigkeit bei Kindern und Erwachsenen: Erwerb, Formen, Förderung (pp. 115-137). Tübingen: Attempto.

    Abstract

    This paper discusses the influence of age-related factors like stage of cognitive development, prior linguistic knowledge, and motivation, and addresses the specific effects of these ‘age factors’ on second language acquisition as opposed to other learning tasks. Based on longitudinal corpus data from child and adolescent learners of L2 German (L1 = Russian), the paper studies the acquisition of word order (verb raising over negation, verb second) and inflectional morphology (subject-verb agreement, tense, noun plural, and adjective-noun agreement). Whereas the child learner shows target-like production in all of these areas within the observation period (1½ years), the adolescent learner masters only some of them. The discussion addresses the question of what it is about clusters of grammatical features that makes them particularly affected by age.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen. Tagesspiegel, 19737, B6-B6.

    Abstract

    The younger – the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘The younger – the better’. It is argued that these generalizations do not apply across the board. Age-related differences like the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Dimroth, C., & Starren, M. (Eds.). (2003). Information structure and the dynamics of language acquisition. Amsterdam: John Benjamins.

    Abstract

    The papers in this volume focus on the impact of information structure on language acquisition, thereby taking different linguistic approaches into account. They start from an empirical point of view, and examine data from natural first and second language acquisition, which cover a wide range of varieties, from early learner language to native speaker production and from gesture to Creole prototypes. The central theme is the interplay between principles of information structure and linguistic structure and its impact on the functioning and development of the learner's system. The papers examine language-internal explanatory factors, in particular the communicative and structural forces that push and shape the acquisition process and its outcome. On the theoretical level, the approach adopted appeals both to formal and communicative constraints on a learner’s language in use. Two empirical domains provide a 'testing ground' for the respective weight of grammatical versus functional determinants in the acquisition process: (1) the expression of finiteness and scope relations at the utterance level and (2) the expression of anaphoric relations at the discourse level.
  • Dimroth, C., Gretsch, P., Jordens, P., Perdue, C., & Starren, M. (2003). Finiteness in Germanic languages: A stage-model for first and second language development. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 65-94). Amsterdam: Benjamins.
  • Dimroth, C., & Starren, M. (2003). Introduction. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 1-14). Amsterdam: John Benjamins.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dittmar, N., Reich, A., Skiba, R., Schumacher, M., & Terborg, H. (1990). Die Erlernung modaler Konzepte des Deutschen durch erwachsene polnische Migranten: Eine empirische Längsschnittstudie. Informationen Deutsch als Fremdsprache: Info DaF, 17(2), 125-172.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Drolet, M., & Kempen, G. (1985). IPG: A cognitive approach to sentence generation. CCAI: The Journal for the Integrated Study of Artificial Intelligence, Cognitive Science and Applied Epistemology, 2, 37-61.
  • Drozd, K. F. (1995). Child English pre-sentential negation as metalinguistic exclamatory sentence negation. Journal of Child Language, 22(3), 583-610. doi:10.1017/S030500090000996X.

    Abstract

    This paper presents a study of the spontaneous pre-sentential negations of ten English-speaking children between the ages of 1;6 and 3;4 which supports the hypothesis that child English nonanaphoric pre-sentential negation is a form of metalinguistic exclamatory sentence negation. A detailed discourse analysis reveals that children's pre-sentential negatives like No Nathaniel a king (i) are characteristically echoic, and (ii) typically express objection and rectification, two characteristic functions of exclamatory negation in adult discourse, e.g. Don't say 'Nathaniel's a king'! A comparison of children's pre-sentential negations with their internal predicate negations using not and don't reveals that the two negative constructions are formally and functionally distinct. I argue that children's nonanaphoric pre-sentential negatives constitute an independent, well-formed class of discourse negation. They are not 'primitive' constructions derived from the miscategorization of emphatic no in adult speech, nor are they children's 'inventions' or an early derivational variant of internal sentence negation. Rather, these negatives reflect young children's competence in using grammatical negative constructions appropriately in discourse.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drude, S. (2003). Advanced glossing: A language documentation format and its implementation with Shoebox. In Proceedings of the 2002 International Conference on Language Resources and Evaluation (LREC 2002). Paris: ELRA.

    Abstract

    This paper presents Advanced Glossing, a proposal for a general glossing format designed for language documentation, and a specific setup for the Shoebox program that implements Advanced Glossing to a large extent. Advanced Glossing (AG) goes beyond the traditional Interlinear Morphemic Translation, keeping syntactic and morphological information apart from each other in separate glossing tables. AG provides specific lines for different kinds of annotation – phonetic, phonological, orthographical, prosodic, categorial, structural, relational, and semantic – and it allows for gradual, successive, incomplete, and partial filling in cases where some information is irrelevant, unknown or uncertain. The implementation of AG in Shoebox sets up several databases. Each documented text is represented as a file of syntactic glossings, while the morphological glossings are kept in a separate database. As an additional feature, interaction with lexical databases is possible. The implementation makes use of the interlinearizing automatism provided by Shoebox, thus obtaining the table format for the alignment of lines in cells, and allowing for semi-automatic filling-in of glossing tables with information extracted from databases.
  • Drude, S. (2003). Digitizing and annotating texts and field recordings in the Awetí project. In Proceedings of the EMELD Language Digitization Project Conference 2003. Workshop on Digitizing and Annotating Text and Field Recordings, LSA Institute, Michigan State University, July 11th-13th.

    Abstract

    Given that several initiatives worldwide currently explore the new field of documentation of endangered languages, the E-MELD project proposes to survey and unite procedures, techniques and results in order to achieve its main goal, "the formulation and promulgation of best practice in linguistic markup of texts and lexicons". In this context, this year's workshop deals with the processing of recorded texts. I assume the most valuable contribution I could make to the workshop is to show the procedures and methods used in the Awetí Language Documentation Project. The procedures applied in the Awetí Project are not necessarily representative of all the projects in the DOBES program, and they may very well fall short in several respects of being best practice, but I hope they might provide a good and concrete starting point for comparison, criticism and further discussion. The procedures presented include: taping with digital devices; digitizing (preliminarily in the field, later definitively by the TIDEL team at the Max Planck Institute in Nijmegen); segmenting and transcribing, using the Transcriber computer program; translating (on paper, or while transcribing); adding more specific annotation, using the Shoebox program; and converting the annotation to the ELAN format developed by the TIDEL team and doing annotation with ELAN. Focus will be on the different types of annotation. In particular, I will present, justify and discuss Advanced Glossing, a text annotation format developed by H.-H. Lieb and myself and designed for language documentation. It will be shown how Advanced Glossing can be applied using the Shoebox program. The Shoebox setup used in the Awetí Project will be shown in greater detail, including lexical databases and semi-automatic interaction between different database types (jumping, interlinearization). (Freie Universität Berlin and Museu Paraense Emílio Goeldi, with funding from the Volkswagen Foundation.)
  • Duffield, N., & Matsuo, A. (2003). Factoring out the parallelism effect in ellipsis: An interactional approach? In J. Chilar, A. Franklin, D. Keizer, & I. Kimbara (Eds.), Proceedings of the 39th Annual Meeting of the Chicago Linguistic Society (CLS) (pp. 591-603). Chicago: Chicago Linguistic Society.

    Abstract

    Traditionally, there have been three standard assumptions made about the Parallelism Effect on VP-ellipsis, namely that the effect is categorical, that it applies asymmetrically and that it is uniquely due to syntactic factors. Based on the results of a series of experiments involving online and offline tasks, it will be argued that the Parallelism Effect is instead noncategorical and interactional. The factors investigated include construction type, conceptual and morpho-syntactic recoverability, finiteness and anaphor type (to test VP-anaphora). The results show that parallelism is gradient rather than categorical, affects both VP-ellipsis and anaphora, and is influenced by both structural and non-structural factors.
  • Duffield, N., Matsuo, A., & Roberts, L. (2007). Acceptable ungrammaticality in sentence matching. Second Language Research, 23(2), 155-177. doi:10.1177/0267658307076544.

    Abstract

    This paper presents results from a new set of experiments using the sentence-matching paradigm (Forster, 1979; Freedman & Forster, 1985; see also Bley-Vroman & Masterson, 1989), investigating native speakers’ and L2 learners’ knowledge of constraints on clitic placement in French. Our purpose is three-fold: (i) to shed more light on the contrasts between native speakers and L2 learners observed in previous experiments, especially Duffield & White (1999) and Duffield, White, Bruhn de Garavito, Montrul & Prévost (2002); (ii) to address specific criticisms of the sentence-matching paradigm leveled by Gass (2001); and (iii) to provide a firm empirical basis for follow-up experiments with L2 learners.
  • Dunn, M. (2003). Pioneers of Island Melanesia project. Oceania Newsletter, 30/31, 1-3.
