Publications

Displaying 101 - 200 of 754
  • Chen, A., Den Os, E., & De Ruiter, J. P. (2007). Pitch accent type matters for online processing of information status: Evidence from natural and synthetic speech. The Linguistic Review, 24(2), 317-344. doi:10.1515/TLR.2007.012.

    Abstract

    Adopting an eyetracking paradigm, we investigated the role of H*L, L*HL, L*H, H*LH, and deaccentuation at the intonational phrase-final position in online processing of information status in British English in natural speech. The role of H*L, L*H and deaccentuation was also examined in diphone-synthetic speech. It was found that H*L and L*HL create a strong bias towards newness, whereas L*H, like deaccentuation, creates a strong bias towards givenness. In synthetic speech, the same effect was found for H*L, L*H and deaccentuation, but it was delayed. The delay may not be caused entirely by the difference in the segmental quality between synthetic and natural speech. The pitch accent H*LH, however, appears to bias participants' interpretation to the target word, independent of its information status. This finding was explained in the light of the effect of durational information at the segmental level on word recognition.
  • Chen, H.-C., & Cutler, A. (1997). Auditory priming in spoken and printed word recognition. In H.-C. Chen (Ed.), Cognitive processing of Chinese and related Asian languages (pp. 77-81). Hong Kong: Chinese University Press.
  • Chen, X. S., Rozhdestvensky, T. S., Collins, L. J., Schmitz, J., & Penny, D. (2007). Combined experimental and computational approach to identify non-protein-coding RNAs in the deep-branching eukaryote Giardia intestinalis. Nucleic Acids Research, 35, 4619-4628. doi:10.1093/nar/gkm474.

    Abstract

    Non-protein-coding RNAs represent a large proportion of transcribed sequences in eukaryotes. These RNAs often function in large RNA–protein complexes, which are catalysts in various RNA-processing pathways. As RNA processing has become an increasingly important area of research, numerous non-messenger RNAs have been uncovered in all the model eukaryotic organisms. However, knowledge on RNA processing in deep-branching eukaryotes is still limited. This study focuses on the identification of non-protein-coding RNAs from the diplomonad parasite Giardia intestinalis, showing that a combined experimental and computational search strategy is a fast method of screening reduced or compact genomes. The analysis of our Giardia cDNA library has uncovered 31 novel candidates, including C/D-box and H/ACA box snoRNAs, as well as an unusual transcript of RNase P, and double-stranded RNAs. Subsequent computational analysis has revealed additional putative C/D-box snoRNAs. Our results will lead towards a future understanding of RNA metabolism in the deep-branching eukaryote Giardia, as more ncRNAs are characterized.
  • Chen, J. (2007). 'He cut-break the rope': Encoding and categorizing cutting and breaking events in Mandarin. Cognitive Linguistics, 18(2), 273-285. doi:10.1515/COG.2007.015.

    Abstract

    Mandarin categorizes cutting and breaking events on the basis of fine semantic distinctions in the causal action and the caused result. I demonstrate the semantics of Mandarin C&B verbs from the perspective of event encoding and categorization as well as argument structure alternations. Three semantically different types of predicates can be identified: verbs denoting the C&B action subevent, verbs encoding the C&B result subevent, and resultative verb compounds (RVC) that encode both the action and the result subevents. The first verb of an RVC is basically dyadic, whereas the second is monadic. RVCs as a whole are also basically dyadic, and do not undergo detransitivization.
  • Chen, A., & Fikkert, P. (2007). Intonation of early two-word utterances in Dutch. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 315-320). Dudweiler: Pirrot.

    Abstract

    We analysed intonation contours of two-word utterances from three monolingual Dutch children aged between 1;4 and 2;1 in the autosegmental-metrical framework. Our data show that children have mastered the inventory of the boundary tones and nuclear pitch accent types (except for L*HL and L*!HL) at the 160-word level, and the set of non-downstepped pre-nuclear pitch accents (except for L*) at the 230-word level, contra previous claims on the mastery of adult-like intonation contours before or at the onset of first words. Further, there is evidence that intonational development is correlated with an increase in vocabulary size. Moreover, we found that children show a preference for falling contours, as predicted on the basis of universal production mechanisms. In addition, the utterances are mostly spoken with both words accented, independent of the semantic relations expressed and the information status of each word, across developmental stages, contra prior work. Our study suggests a number of topics for further research.
  • Chen, A. (2007). Intonational realisation of topic and focus by Dutch-acquiring 4- to 5-year-olds. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1553-1556). Dudweiler: Pirrot.

    Abstract

    This study examined how Dutch-acquiring 4- to 5-year-olds use different pitch accent types and deaccentuation to mark topic and focus at the sentence level and how they differ from adults. The topic and focus were non-contrastive and realised as full noun phrases. It was found that children realise topic and focus similarly frequently with H*L, whereas adults use H*L noticeably more frequently in focus than in topic in sentence-initial position and nearly only in focus in sentence-final position. Further, children frequently realise the topic with an accent, whereas adults mostly deaccent the sentence-final topic and use H*L and H* to realise the sentence-initial topic because of rhythmic motivation. These results show that 4- and 5-year-olds have not acquired H*L as the typical focus accent and deaccentuation as the typical topic intonation yet. Possibly, frequent use of H*L in sentence-initial topic in adult Dutch has made it difficult to extract the functions of H*L and deaccentuation from the input.
  • Chen, A. (2007). Language-specificity in the perception of continuation intonation. In C. Gussenhoven, & T. Riad (Eds.), Tones and tunes II: Phonetic and behavioural studies in word and sentence prosody (pp. 107-142). Berlin: Mouton de Gruyter.

    Abstract

    This paper addressed the question of how British English, German and Dutch listeners differ in their perception of continuation intonation both at the phonological level (Experiment 1) and at the level of phonetic implementation (Experiment 2). In Experiment 1, preference scores of pitch contours to signal continuation at the clause-boundary were obtained from these listener groups. It was found that among contours with H%, British English listeners had a strong preference for H*L H%, as predicted. Unexpectedly, British English listeners rated H* H% noticeably more favourably than L*H H%; Dutch listeners largely rated H* H% more favourably than H*L H% and L*H H%; German listeners rated these contours similarly and seemed to have a slight preference for H*L H%. In Experiment 2, the degree to which a final rise was perceived to express continuation was established for each listener group in a made-up language. It was found that although all listener groups associated a higher end pitch with a higher degree of continuation likelihood, the perceived meaning difference for a given interval of end pitch heights varied with the contour shape of the utterance final syllable. When it was comparable to H* H%, British English and Dutch listeners perceived a larger meaning difference than German listeners; when it was comparable to H*L H%, British English listeners perceived a larger difference than German and Dutch listeners. This shows that language-specificity in continuation intonation at the phonological level affects the perception of continuation intonation at the phonetic level.
  • Cho, T., McQueen, J. M., & Cox, E. A. (2007). Prosodically driven phonetic detail in speech processing: The case of domain-initial strengthening in English. Journal of Phonetics, 35(2), 210-243. doi:10.1016/j.wocn.2006.03.003.

    Abstract

    We explore the role of the acoustic consequences of domain-initial strengthening in spoken-word recognition. In two cross-modal identity-priming experiments, listeners heard sentences and made lexical decisions to visual targets, presented at the onset of the second word in two-word sequences containing lexical ambiguities (e.g., bus tickets, with the competitor bust). These sequences contained Intonational Phrase (IP) or Prosodic Word (Wd) boundaries, and the second word's initial Consonant and Vowel (CV, e.g., [tI]) was spliced from another token of the sequence in IP- or Wd-initial position. Acoustic analyses showed that IP-initial consonants were articulated more strongly than Wd-initial consonants. In Experiment 1, related targets were post-boundary words (e.g., tickets). No strengthening effect was observed (i.e., identity priming effects did not vary across splicing conditions). In Experiment 2, related targets were pre-boundary words (e.g., bus). There was a strengthening effect (stronger priming when the post-boundary CVs were spliced from IP-initial than from Wd-initial position), but only in Wd-boundary contexts. These were the conditions where phonetic detail associated with domain-initial strengthening could assist listeners most in lexical disambiguation. We discuss how speakers may strengthen domain-initial segments during production and how listeners may use the resulting acoustic correlates of prosodic strengthening during word recognition.
  • Christoffels, I. K., Formisano, E., & Schiller, N. O. (2007). The neural correlates of verbal feedback processing: An fMRI study employing overt speech. Human Brain Mapping, 28(9), 868-879. doi:10.1002/hbm.20315.

    Abstract

    Speakers use external auditory feedback to monitor their own speech. Feedback distortion has been found to increase activity in the superior temporal areas. Using fMRI, the present study investigates the neural correlates of processing verbal feedback without distortion. In a blocked design, the following conditions were presented: (1) overt picture-naming, (2) overt picture-naming while pink noise was presented to mask external feedback, (3) covert picture-naming, (4) listening to the picture names (previously recorded from participants' own voices), and (5) listening to pink noise. The results show that auditory feedback processing involves a network of different areas related to general performance monitoring and speech-motor control. These include the cingulate cortex and the bilateral insula, supplementary motor area, bilateral motor areas, cerebellum, thalamus and basal ganglia. Our findings suggest that the anterior cingulate cortex, which is often implicated in error-processing and conflict-monitoring, is also engaged in ongoing speech monitoring. Furthermore, in the superior temporal gyrus, we found a reduced response to speaking under normal feedback conditions. This finding is interpreted in the framework of a forward model according to which, during speech production, the sensory consequence of the speech-motor act is predicted to attenuate the sensitivity of the auditory cortex.
  • Christoffels, I. K., Firk, C., & Schiller, N. O. (2007). Bilingual language control: An event-related brain potential study. Brain Research, 1147, 192-208. doi:10.1016/j.brainres.2007.01.137.

    Abstract

    This study addressed how bilingual speakers switch between their first and second language when speaking. Event-related brain potentials (ERPs) and naming latencies were measured while unbalanced German (L1)-Dutch (L2) speakers performed a picture-naming task. Participants named pictures either in their L1 or in their L2 (blocked language conditions), or participants switched between their first and second language unpredictably (mixed language condition). Furthermore, form similarity between translation equivalents (cognate status) was manipulated. A cognate facilitation effect was found for L1 and L2 indicating phonological activation of the non-response language in blocked and mixed language conditions. The ERP data also revealed small but reliable effects of cognate status. Language switching resulted in equal switching costs for both languages and was associated with a modulation in the ERP waveforms (time windows 275-375 ms and 375-475 ms). Mixed language context affected especially the L1, both in ERPs and in latencies, which became slower in L1 than L2. It is suggested that sustained and transient components of language control should be distinguished. Results are discussed in relation to current theories of bilingual language processing.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Chwilla, D., Brown, C. M., & Hagoort, P. (1995). The N400 as a function of the level of processing. Psychophysiology, 32, 274-285. doi:10.1111/j.1469-8986.1995.tb02956.x.

    Abstract

    In a semantic priming paradigm, the effects of different levels of processing on the N400 were assessed by changing the task demands. In the lexical decision task, subjects had to discriminate between words and nonwords and in the physical task, subjects had to discriminate between uppercase and lowercase letters. The proportion of related versus unrelated word pairs differed between conditions. A lexicality test on reaction times demonstrated that the physical task was performed nonlexically. Moreover, a semantic priming reaction time effect was obtained only in the lexical decision task. The level of processing clearly affected the event-related potentials. An N400 priming effect was only observed in the lexical decision task. In contrast, in the physical task a P300 effect was observed for either related or unrelated targets, depending on their frequency of occurrence. Taken together, the results indicate that an N400 priming effect is only evoked when the task performance induces the semantic aspects of words to become part of an episodic trace of the stimulus event.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., & Allen, S. E. M. (1997). Linguistic and cultural aspects of simplicity and complexity in Inuktitut child directed speech. In E. Hughes, M. Hughes, & A. Greenhill (Eds.), Proceedings of the 21st annual Boston University Conference on Language Development (pp. 91-102).
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Allen, S. E. M., & Hough-Eyamie, W. P. (1997). Exploring innateness through cultural and linguistic variation. In M. Gopnik (Ed.), The inheritance and innateness of grammars (pp. 70-90). New York City, NY, USA: Oxford University Press, Inc.
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICLSP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of Phonetic Society of Japan, 1, 4-13.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Spraak is continu, er zijn geen betrouwbare signalen waardoor de luisteraar weet waar het ene woord eindigt en het volgende begint. Voor volwassen luisteraars is het segmenteren van gesproken taal in afzonderlijke woorden dus niet onproblematisch, maar voor een kind dat nog geen woordenschat bezit, vormt de continuïteit van spraak een nog grotere uitdaging. Desalniettemin produceren de meeste kinderen hun eerste herkenbare woorden rond het begin van het tweede levensjaar. Aan deze vroege spraakproducties gaat een formidabele perceptuele prestatie vooraf. Tijdens het eerste levensjaar - met name gedurende de tweede helft - ontwikkelt de spraakperceptie zich van een algemeen fonetisch discriminatievermogen tot een selectieve gevoeligheid voor de fonologische contrasten die in de moedertaal voorkomen. Recent onderzoek heeft verder aangetoond dat kinderen, lang voordat ze ook maar een enkel woord kunnen zeggen, in staat zijn woorden die kenmerkend zijn voor hun moedertaal te onderscheiden van woorden die dat niet zijn. Bovendien kunnen ze woorden die eerst in isolatie werden aangeboden herkennen in een continue spraakcontext. Het dagelijkse taalaanbod aan een kind van deze leeftijd maakt het in zekere zin niet gemakkelijk, bijvoorbeeld doordat de meeste woorden niet in isolatie voorkomen. Toch wordt het kind ook wel houvast geboden, onder andere doordat het woordgebruik beperkt is.
  • Cutler, A., Wales, R., Cooper, N., & Janssen, J. (2007). Dutch listeners' use of suprasegmental cues to English stress. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1913-1916). Dudweiler: Pirrot.

    Abstract

    Dutch listeners outperform native listeners in identifying syllable stress in English. This is because lexical stress is more useful in recognition of spoken words of Dutch than of English, so that Dutch listeners pay greater attention to stress in general. We examined Dutch listeners’ use of the acoustic correlates of English stress. Primary- and secondary-stressed syllables differ significantly on acoustic measures, and some differences, in F0 especially, correlate with data of earlier listening experiments. The correlations found in the Dutch responses were not paralleled in data from native listeners. Thus the acoustic cues which distinguish English primary versus secondary stress are better exploited by Dutch than by native listeners.
  • Cutler, A., & Weber, A. (2007). Listening experience and phonetic-to-lexical mapping in L2. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 43-48). Dudweiler: Pirrot.

    Abstract

    In contrast to initial L1 vocabularies, which of necessity depend largely on heard exemplars, L2 vocabulary construction can draw on a variety of knowledge sources. This can lead to richer stored knowledge about the phonology of the L2 than the listener's prelexical phonetic processing capacity can support, and thus to mismatch between the level of detail required for accurate lexical mapping and the level of detail delivered by the prelexical processor. Experiments on spoken word recognition in L2 have shown that phonetic contrasts which are not reliably perceived are represented in the lexicon nonetheless. This lexical representation of contrast must be based on abstract knowledge, not on veridical representation of heard exemplars. New experiments confirm that provision of abstract knowledge (in the form of spelling) can induce lexical representation of a contrast which is not reliably perceived; but also that experience (in the form of frequency of occurrence) modulates the mismatch of phonetic and lexical processing. We conclude that a correct account of word recognition in L2 (as indeed in L1) requires consideration of both abstract and episodic information.
  • Cutler, A., Cooke, M., Garcia-Lecumberri, M. L., & Pasveer, D. (2007). L2 consonant identification in noise: Cross-language comparisons. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1585-1588). Adelaide: Causal Productions.

    Abstract

    The difficulty of listening to speech in noise is exacerbated when the speech is in the listener’s L2 rather than L1. In this study, Spanish and Dutch users of English as an L2 identified American English consonants in a constant intervocalic context. Their performance was compared with that of L1 (British English) listeners, under quiet conditions and when the speech was masked by speech from another talker or by noise. Masking affected performance more for the Spanish listeners than for the L1 listeners, but not for the Dutch listeners, whose performance was worse than the L1 case to about the same degree in all conditions. There were, however, large differences in the pattern of results across individual consonants, which were consistent with differences in how consonants are identified in the respective L1s.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (a coup stick snot with standing). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A. (2000). How the ear comes to hear. In New Trends in Modern Linguistics [Part of Annual catalogue series] (pp. 6-10). Tokyo, Japan: Maruzen Publishers.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A. (2000). Hoe het woord het oor verovert. In Voordrachten uitgesproken tijdens de uitreiking van de SPINOZA-premies op 15 februari 2000 (pp. 29-41). The Hague, The Netherlands: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A., & Chen, H.-C. (1995). Phonological similarity effects in Cantonese word recognition. In K. Elenius, & P. Branderud (Eds.), Proceedings of the Thirteenth International Congress of Phonetic Sciences: Vol. 1 (pp. 106-109). Stockholm: Stockholm University.

    Abstract

    Two lexical decision experiments in Cantonese are described in which the recognition of spoken target words as a function of phonological similarity to a preceding prime is investigated. Phonological similarity in first syllables produced inhibition, while similarity in second syllables led to facilitation. Differences between syllables in tonal and segmental structure had generally similar effects.
  • Cutler, A., McQueen, J. M., & Zondervan, R. (2000). Proceedings of SWAP (Workshop on Spoken Word Access Processes). Nijmegen: MPI for Psycholinguistics.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1982). Prosody and sentence perception in English. In J. Mehler, E. C. Walker, & M. Garrett (Eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities (pp. 201-216). Hillsdale, N.J: Erlbaum.
  • Cutler, A. (1997). Prosody and the structure of the message. In Y. Sagisaka, N. Campbell, & N. Higuchi (Eds.), Computing prosody: Computational models for processing spontaneous speech (pp. 63-66). Heidelberg: Springer.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A. (2000). Real words, phantom words and impossible words. In D. Burnham, S. Luksaneeyanawin, C. Davis, & M. Lafourcade (Eds.), Interdisciplinary approaches to language processing: The international conference on human and machine processing of language and speech (pp. 32-42). Bangkok: NECTEC.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A. (1975). Sentence stress and sentence comprehension. PhD Thesis, University of Texas, Austin.
  • Cutler, A. (Ed.). (1982). Slips of the tongue and language production. The Hague: Mouton.
  • Cutler, A. (1982). Speech errors: A classified bibliography. Bloomington: Indiana University Linguistics Club.
  • Cutler, A. (1995). Spoken word recognition and production. In J. L. Miller, & P. D. Eimas (Eds.), Speech, language and communication (pp. 97-136). New York: Academic Press.

    Abstract

    This chapter highlights that most language behavior consists of speaking and listening. The chapter also reveals differences and similarities between speaking and listening. The laboratory study of word production raises formidable problems; ensuring that a particular word is produced may subvert the spontaneous production process. Word production is investigated via slips and tip-of-the-tongue (TOT) states, that is, via instances of processing failure, and via the picture-naming task. The methodology of word production is explained in the chapter. The chapter also explains the phenomenon of interaction between various stages of word production and the process of speech recognition. In this context, it explores the difference between sound and meaning and examines whether or not the comparisons are appropriate between the processes of recognition and production of spoken words. It also describes the similarities and differences in the structure of the recognition and production systems. Finally, the chapter highlights the common issues in recognition and production research, which include the nuances of frequency of occurrence, morphological structure, and phonological structure.
  • Cutler, A. (1995). Spoken-word recognition. In G. Bloothooft, V. Hazan, D. Hubert, & J. Llisterri (Eds.), European studies in phonetics and speech communication (pp. 66-71). Utrecht: OTS.
  • Cutler, A., & Koster, M. (2000). Stress and lexical activation in Dutch. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 1 (pp. 593-596). Beijing: China Military Friendship Publish.

    Abstract

    Dutch listeners were slower to make judgements about the semantic relatedness between a spoken target word (e.g. atLEET, 'athlete') and a previously presented visual prime word (e.g. SPORT 'sport') when the spoken word was mis-stressed. The adverse effect of mis-stressing confirms the role of stress information in lexical recognition in Dutch. However, although the erroneous stress pattern was always initially compatible with a competing word (e.g. ATlas, 'atlas'), mis-stressed words did not produce high false alarm rates in unrelated pairs (e.g. SPORT - atLAS). This suggests that stress information did not completely rule out segmentally matching but suprasegmentally mismatching words, a finding consistent with spoken-word recognition models involving multiple activation and inter-word competition.
  • Cutler, A. (1995). The perception of rhythm in spoken and written language. In J. Mehler, & S. Franck (Eds.), Cognition on cognition (pp. 283-288). Cambridge, MA: MIT Press.
  • Cutler, A., & McQueen, J. M. (1995). The recognition of lexical units in speech. In B. De Gelder, & J. Morais (Eds.), Speech and reading: A comparative approach (pp. 33-47). Hove, UK: Erlbaum.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A. (1997). The syllable’s role in the segmentation of stress languages. Language and Cognitive Processes, 12, 839-845. doi:10.1080/016909697386718.
  • Cutler, A., & Fay, D. (1975). You have a Dictionary in your Head, not a Thesaurus. Texas Linguistic Forum, 1, 27-40.
  • Cutler, A., Norris, D., & McQueen, J. M. (2000). Tracking TRACE’s troubles. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 63-66). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of acoustic-phonetic mismatches in word forms. The source of TRACE's failure lay not in its interactive connectivity, not in the presence of interword competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model.
  • Cutler, A. (1995). Universal and Language-Specific in the Development of Speech. Biology International, (Special Issue 33).
  • Dahan, D., & Gaskell, M. G. (2007). The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition. Journal of Memory and Language, 57(4), 483-501. doi:10.1016/j.jml.2007.01.001.

    Abstract

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants’ responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
  • Danziger, E. (1995). Intransitive predicate form class survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 46-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004298.

    Abstract

    Different linguistic structures allow us to highlight distinct aspects of a situation. The aim of this survey is to investigate similarities and differences in the expression of situations or events as “stative” (maintaining a state), “inchoative” (adopting a state) and “agentive” (causing something to be in a state). The questionnaire focuses on the encoding of stative, inchoative and agentive possibilities for the translation equivalents of a set of English verbs.
  • Danziger, E. (1995). Posture verb survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 33-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004235.

    Abstract

    Expressions of human activities and states are a rich area for cross-linguistic comparison. Some languages of the world treat human posture verbs (e.g., sit, lie, kneel) as a special class of predicates, with distinct formal properties. This survey examines lexical, semantic and grammatical patterns for posture verbs, with special reference to contrasts between “stative” (maintaining a posture), “inchoative” (adopting a posture), and “agentive” (causing something to adopt a posture) constructions. The enquiry is thematically linked to the more general questionnaire 'Intransitive Predicate Form Class Survey'.
  • Davidson, D. J., & Indefrey, P. (2007). An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Research, 1158, 81-92. doi:10.1016/j.brainres.2007.04.082.

    Abstract

    The relationship between semantic and grammatical processing in sentence comprehension was investigated by examining event-related potential (ERP) and event-related power changes in response to semantic and grammatical violations. Sentences with semantic, phrase structure, or number violations and matched controls were presented serially (1.25 words/s) to 20 participants while EEG was recorded. Semantic violations were associated with an N400 effect and a theta band increase in power, while grammatical violations were associated with a P600 effect and an alpha/beta band decrease in power. A quartile analysis showed that for both types of violations, larger average violation effects were associated with lower relative amplitudes of oscillatory activity, implying an inverse relation between ERP amplitude and event-related power magnitude change in sentence processing.
  • Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. PNAS, 104, 10944-10949. doi:10.1073/pnas.0610848104.

    Abstract

    The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.
  • Dediu, D. (2007). Non-spurious correlations between genetic and linguistic diversities in the context of human evolution. PhD Thesis, University of Edinburgh, Edinburgh, UK.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [ŋ] is mispronounced as [n], the [n] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dietrich, R., Klein, W., & Noyau, C. (1995). The acquisition of temporality in a second language. Amsterdam: Benjamins.
  • Dijkstra, T., & Kempen, G. (1997). Het taalgebruikersmodel. In H. Hulshof, & T. Hendrix (Eds.), De taalcentrale. Amsterdam: Bulkboek.
  • Dimroth, C. (2007). Zweitspracherwerb bei Kindern und Jugendlichen: Gemeinsamkeiten und Unterschiede. In T. Anstatt (Ed.), Mehrsprachigkeit bei Kindern und Erwachsenen: Erwerb, Formen, Förderung (pp. 115-137). Tübingen: Attempto.

    Abstract

    This paper discusses the influence of age-related factors like stage of cognitive development, prior linguistic knowledge, and motivation and addresses the specific effects of these ‘age factors’ on second language acquisition as opposed to other learning tasks. Based on longitudinal corpus data from child and adolescent learners of L2 German (L1 = Russian), the paper studies the acquisition of word order (verb raising over negation, verb second) and inflectional morphology (subject-verb agreement, tense, noun plural, and adjective-noun agreement). Whereas the child learner shows target-like production in all of these areas within the observation period (1½ years), the adolescent learner masters only some of them. The discussion addresses the question of what it is about clusters of grammatical features that make them particularly affected by age.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen. Tagesspiegel, 19737, B6-B6.

    Abstract

    The younger - the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘The younger – the better’. It is argued that these generalizations do not apply across the board. Age related differences like the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.
  • Dittmar, N., & Klein, W. (1975). Untersuchungen zum Pidgin-Deutsch spanischer und italienischer Arbeiter in der Bundesrepublik: Ein Arbeitsbericht. In A. Wierlacher (Ed.), Jahrbuch Deutsch als Fremdsprache (pp. 170-194). Heidelberg: Groos.
  • Drozd, K. F. (1995). Child English pre-sentential negation as metalinguistic exclamatory sentence negation. Journal of Child Language, 22(3), 583-610. doi:10.1017/S030500090000996X.

    Abstract

    This paper presents a study of the spontaneous pre-sentential negations of ten English-speaking children between the ages of 1;6 and 3;4 which supports the hypothesis that child English nonanaphoric pre-sentential negation is a form of metalinguistic exclamatory sentence negation. A detailed discourse analysis reveals that children's pre-sentential negatives like No Nathaniel a king (i) are characteristically echoic, and (ii) typically express objection and rectification, two characteristic functions of exclamatory negation in adult discourse, e.g. Don't say 'Nathaniel's a king'! A comparison of children's pre-sentential negations with their internal predicate negations using not and don't reveals that the two negative constructions are formally and functionally distinct. I argue that children's nonanaphoric pre-sentential negatives constitute an independent, well-formed class of discourse negation. They are not 'primitive' constructions derived from the miscategorization of emphatic no in adult speech or children's 'inventions'. Nor are they an early derivational variant of internal sentence negation. Rather, these negatives reflect young children's competence in using grammatical negative constructions appropriately in discourse.
  • Drozd, K., & Van de Weijer, J. (Eds.). (1997). Max Planck Institute for Psycholinguistics: Annual report 1997. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drude, S. (1997). Wörterbücher, integrativ interpretiert, am Beispiel des Guaraní. Magister Thesis, Freie Universität Berlin.
  • Duffield, N., Matsuo, A., & Roberts, L. (2007). Acceptable ungrammaticality in sentence matching. Second Language Research, 23(2), 155-177. doi:10.1177/0267658307076544.

    Abstract

    This paper presents results from a new set of experiments using the sentence matching paradigm (Forster, 1979; Freedman & Forster, 1985; see also Bley-Vroman & Masterson, 1989), investigating native speakers' and L2 learners' knowledge of constraints on clitic placement in French. Our purpose is three-fold: (i) to shed more light on the contrasts between native speakers and L2 learners observed in previous experiments, especially Duffield & White (1999) and Duffield, White, Bruhn de Garavito, Montrul & Prévost (2002); (ii) to address specific criticisms of the sentence-matching paradigm leveled by Gass (2001); and (iii) to provide a firm empirical basis for follow-up experiments with L2 learners.
  • Dunn, M., Foley, R., Levinson, S. C., Reesink, G., & Terrill, A. (2007). Statistical reasoning in the evaluation of typological diversity in Island Melanesia. Oceanic Linguistics, 46(2), 388-403.

    Abstract

    This paper builds on previous work in which we attempted to retrieve a phylogenetic signal using abstract structural features alone, as opposed to cognate sets, drawn from a sample of Island Melanesian languages, both Oceanic (Austronesian) and (non-Austronesian) Papuan (Science 2005[309]: 2072-75). Here we clarify a number of misunderstandings of this approach, referring particularly to the critique by Mark Donohue and Simon Musgrave (in this same issue of Oceanic Linguistics), in which they fail to appreciate the statistical principles underlying computational phylogenetic methods. We also present new analyses that provide stronger evidence supporting the hypotheses put forward in our original paper: a reanalysis using Bayesian phylogenetic inference demonstrates the robustness of the data and methods, and provides a substantial improvement over the parsimony method used in our earlier paper. We further demonstrate, using the technique of spatial autocorrelation, that neither proximity nor Oceanic contact can be a major determinant of the pattern of structural variation of the Papuan languages, and thus that the phylogenetic relatedness of the Papuan languages remains a serious hypothesis.
  • Dunn, M. (2007). Vernacular literacy in the Touo language of the Solomon Islands. In A. J. Liddicoat (Ed.), Language planning and policy: Issues in language planning and literacy (pp. 209-220). Clevedon: Multilingual matters.

    Abstract

    The Touo language is a non-Austronesian language spoken on Rendova Island (Western Province, Solomon Islands). First language speakers of Touo are typically multilingual, and are likely to speak other (Austronesian) vernaculars, as well as Solomon Island Pijin and English. There is no institutional support of literacy in Touo: schools function in English, and church-based support for vernacular literacy focuses on the major Austronesian languages of the local area. Touo vernacular literacy exists in a restricted niche of the linguistic ecology, where it is utilised for symbolic rather than communicative goals. Competing vernacular orthographic traditions complicate the situation further.
  • Dunn, M., Margetts, A., Meira, S., & Terrill, A. (2007). Four languages from the lower end of the typology of locative predication. Linguistics, 45, 873-892. doi:10.1515/LING.2007.026.

    Abstract

    As proposed by Ameka and Levinson (this issue), locative verb systems can be classified into four types according to the number of verbs distinguished. This article addresses the lower extreme of this typology: languages which offer no choice of verb in the basic locative function (BLF). These languages have either a single locative verb, or do not use verbs at all in the basic locative construction (BLC, the construction used to encode the BLF). A close analysis is presented of the behavior of BLF predicate types in four genetically diverse languages: Chukchi (Chukotko-Kamchatkan, Russian Arctic), and Lavukaleve (Papuan isolate, Solomon Islands), which have a BLC with the normal copula/existential verb for the language; Tiriyó (Cariban/Taranoan, Brazil), which has an optional copula in the BLC; and Saliba (Austronesian/Western Oceanic, Papua New Guinea), a language with a verbless clause as the BLC. The status of these languages in the typology of positional verb systems is reviewed, and other relevant typological generalizations are discussed.
  • Dunn, M., & Ross, M. (2007). Is Kazukuru really non-Austronesian? Oceanic Linguistics, 46(1), 210-231. doi:10.1353/ol.2007.0018.

    Abstract

    Kazukuru is an extinct language, originally spoken in the inland of the western part of the island of New Georgia, Solomon Islands, and attested by very limited historical sources. Kazukuru has generally been considered to be a Papuan, that is, non-Austronesian, language, mostly on the basis of its lexicon. Reevaluation of the available data suggests a high likelihood that Kazukuru was in fact an Oceanic Austronesian language. Pronominal paradigms are clearly of Austronesian origin, and many other aspects of language structure retrievable from the limited data are also congruent with regional Oceanic Austronesian typology. The extent and possible causes of Kazukuru lexical deviations from the Austronesian norm are evaluated and discussed.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardisation and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic as it is based on features of Russian phonology, rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Ehrich, V., & Levelt, W. J. M. (Eds.). (1982). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.3 1982. Nijmegen: MPI for Psycholinguistics.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eisenbeiss, S. (2000). The acquisition of Determiner Phrase in German child language. In M.-A. Friedemann, & L. Rizzi (Eds.), The Acquisition of Syntax (pp. 26-62). Harlow, UK: Pearson Education Ltd.
  • Enfield, N. J., & Stivers, T. (Eds.). (2007). Person reference in interaction: Linguistic, cultural, and social perspectives. Cambridge: Cambridge University Press.

    Abstract

    How do we refer to people in everyday conversation? No matter the language or culture, we must choose from a range of options: full name ('Robert Smith'), reduced name ('Bob'), description ('tall guy'), kin term ('my son') etc. Our choices reflect how we know that person in context, and allow us to take a particular perspective on them. This book brings together a team of leading linguists, sociologists and anthropologists to show that there is more to person reference than meets the eye. Drawing on video-recorded, everyday interactions in nine languages, it examines the fascinating ways in which we exploit person reference for social and cultural purposes, and reveals the underlying principles of person reference across cultures from the Americas to Asia to the South Pacific. It combines rich ethnographic detail with cross-linguistic generalizations.
  • Enfield, N. J., Kita, S., & De Ruiter, J. P. (2007). Primary and secondary pragmatic functions of pointing gestures. Journal of Pragmatics, 39(10), 1722-1741. doi:10.1016/j.pragma.2007.03.001.

    Abstract

    This article presents a study of a set of pointing gestures produced together with speech in a corpus of video-recorded “locality description” interviews in rural Laos. In a restricted set of the observed gestures (we did not consider gestures with special hand shapes, gestures with arc/tracing motion, or gestures directed at referents within physical reach), two basic formal types of pointing gesture are observed: B-points (large movement, full arm, eye gaze often aligned) and S-points (small movement, hand only, casual articulation). Taking the approach that speech and gesture are structurally integrated in composite utterances, we observe that these types of pointing gesture have distinct pragmatic functions at the utterance level. One type of gesture (usually “big” in form) carries primary, informationally foregrounded information (for saying “where” or “which one”). Infants perform this type of gesture long before they can talk. The second type of gesture (usually “small” in form) carries secondary, informationally backgrounded information which responds to a possible but uncertain lack of referential common ground. We propose that the packaging of the extra locational information into a casual gesture is a way of adding extra information to an utterance without it being on-record that the added information was necessary. This is motivated by the conflict between two general imperatives of communication in social interaction: a social-affiliational imperative not to provide more information than necessary (“Don’t over-tell”), and an informational imperative not to provide less information than necessary (“Don’t under-tell”).
  • Enfield, N. J., Levinson, S. C., De Ruiter, J. P., & Stivers, T. (2007). Building a corpus of multimodal interaction in your field site. In A. Majid (Ed.), Field Manual Volume 10 (pp. 96-99). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468728.

    Abstract

    Research on video- and audio-recordings of spontaneous naturally-occurring conversation in English has shown that conversation is a rule-guided, practice-oriented domain that can be investigated for its underlying mechanics or structure. Systematic study could yield something like a grammar for conversation. The goal of this task is to acquire a corpus of video-data, for investigating the underlying structure(s) of interaction cross-linguistically and cross-culturally.
  • Enfield, N. J. (2007). Encoding three-participant events in the Lao clause. Linguistics, 45(3), 509-538. doi:10.1515/LING.2007.016.

    Abstract

    Any language will have a range of predicates that specify three core participants (e.g. 'put', 'show', 'give'), and will conventionally provide a range of constructional types for the expression of these three participants in a structured single-clause or single-sentence event description. This article examines the clausal encoding of three-participant events in Lao, a Tai language of Southeast Asia. There is no possibility in Lao for expression of three full arguments in the core of a single-verb clause (although it is possible to have a third argument in a noncore slot, marked as oblique with a preposition-like element). Available alternatives include extraposing an argument using a topic-comment construction, incorporating an argument into the verb phrase, and ellipsing one or more contextually retrievable arguments. A more common strategy is verb serialization, for example, where a three-place verb (e.g. 'put') is assisted by an additional verb (typically a verb of handling such as 'carry') that provides a slot for the theme argument (e.g. the transferred object in a putting scene). The event construal encoded by this type of structure decomposes the event into a first stage in which the agent comes into control over a theme, and a second in which the agent performs a controlled action (e.g. of transfer) with respect to that theme and a goal (and/or source). The particular set of strategies that Lao offers for encoding three-participant events — notably, the topic-comment strategy, ellipsis strategy, and serial verb strategy — conform with (and are presumably motivated by) the general typological profile of the language. The typological features of Lao are typical for the mainland Southeast Asia area (isolating, topic-prominent, verb-serializing, widespread nominal ellipsis).
  • Enfield, N. J. (2007). A grammar of Lao. Berlin: Mouton de Gruyter.

    Abstract

    Lao is the national language of Laos, and is also spoken widely in Thailand and Cambodia. It is a tone language of the Tai-Kadai family (Southwestern Tai branch). Lao is an extreme example of the isolating, analytic language type. This book is the most comprehensive grammatical description of Lao to date. It describes and analyses the important structures of the language, including classifiers, sentence-final particles, and serial verb constructions. Special attention is paid to grammatical topics from a semantic, pragmatic, and typological perspective.
  • Enfield, N. J. (2007). [Comment on 'Agency' by Paul Kockelman]. Current Anthropology, 48(3), 392-392. doi:10.1086/512998.
  • Enfield, N. J. (2007). [review of the book Ethnopragmatics: Understanding discourse in cultural context ed. by Cliff Goddard]. Intercultural Pragmatics, 4(3), 419-433. doi:10.1515/IP.2007.021.
  • Enfield, N. J. (2007). Meanings of the unmarked: How 'default' person reference does more than just refer. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 97-120). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2007). Lao separation verbs and the logic of linguistic event categorization. Cognitive Linguistics, 18(2), 287-296. doi:10.1515/COG.2007.016.

    Abstract

    While there are infinite conceivable events of material separation, those actually encoded in the conventions of a given language's verb semantics number only a few. Furthermore, there appear to be crosslinguistic parallels in the native verbal analysis of this conceptual domain. What are the operative distinctions, and why these? This article analyses a key subset of the bivalent (transitive) verbs of cutting and breaking in Lao. I present a decompositional analysis of the verbs glossed 'cut (off)', 'cut.into.with.placed.blade', 'cut.into.with.moving.blade', and 'snap', pursuing the idea that the attested combinations of sub-events have a natural logic to them. Consideration of the nature of linguistic categories, as distinct from categories in general, suggests that the attested distinctions must have ethnographic and social interactional significance, raising new lines of research for cognitive semantics.
  • Enfield, N. J. (2000). On linguocentrism. In M. Pütz, & M. H. Verspoor (Eds.), Explorations in linguistic relativity (pp. 125-157). Amsterdam: Benjamins.
  • Enfield, N. J. (2007). Repair sequences in interaction. In A. Majid (Ed.), Field Manual Volume 10 (pp. 100-103). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.468724.

    Abstract

    This sub-project is concerned with analysis and cross-linguistic comparison of the mechanisms of signaling and redressing ‘trouble’ during conversation. Speakers and listeners constantly face difficulties with many different aspects of speech production and comprehension during conversation. A speaker may mispronounce a word, may be unable to find a word, or may be unable to formulate in words an idea he or she has in mind. A listener may have trouble hearing (part of) what was said, may not know who a speaker is referring to, or may not be sure of the current relevance of what is being said. There may be problems in the organisation of turns at talk; for instance, two speakers’ speech may be in overlap. The goal of this task is to investigate the range of practices that a language uses to address problems of speaking, hearing and understanding in conversation.
  • Enfield, N. J. (1997). Review of 'Give: a cognitive linguistic study', by John Newman. Australian Journal of Linguistics, 17(1), 89-92. doi:10.1080/07268609708599546.
  • Enfield, N. J. (1997). Review of 'Plastic glasses and church fathers: semantic extension from the ethnoscience tradition', by David Kronenfeld. Anthropological Linguistics, 39(3), 459-464. Retrieved from http://www.jstor.org/stable/30028999.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Enfield, N. J., & Evans, G. (2000). Transcription as standardisation: The problem of Tai languages. In S. Burusphat (Ed.), Proceedings: the International Conference on Tai Studies, July 29-31, 1998 (pp. 201-212). Bangkok, Thailand: Institute of Language and Culture for Rural Development, Mahidol University.
  • Ernestus, M., Van Mulken, M., & Baayen, R. H. (2007). Ridders en heiligen in tijd en ruimte: Moderne stylometrische technieken toegepast op Oud-Franse teksten. Taal en Tongval, 58, 1-83.

    Abstract

    This article shows that Old-French literary texts differ systematically in their relative frequencies of syntactic constructions. These frequencies reflect differences in register (poetry versus prose), region (Picardy, Champagne, and Eastern France), time period (until 1250, 1251 – 1300, 1301 – 1350), and genre (hagiography, romance of chivalry, or other).
  • Ernestus, M., & Baayen, R. H. (2007). Paradigmatic effects in auditory word recognition: The case of alternating voice in Dutch. Language and Cognitive Processes, 22(1), 1-24. doi:10.1080/01690960500268303.

    Abstract

    Two lexical decision experiments addressed the role of paradigmatic effects in auditory word recognition. Experiment 1 showed that listeners classified a form with an incorrectly voiced final obstruent more readily as a word if the obstruent is realised as voiced in other forms of that word's morphological paradigm. Moreover, if such was the case, the exact probability of paradigmatic voicing emerged as a significant predictor of the response latencies. A greater probability of voicing correlated with longer response latencies for words correctly realised with voiceless final obstruents. A similar effect of this probability was observed in Experiment 2 for words with completely voiceless or weakly voiced (incompletely neutralised) final obstruents. These data demonstrate the relevance of paradigmatically related complex words for the processing of morphologically simple words in auditory word recognition.
  • Ernestus, M., & Baayen, R. H. (2007). The comprehension of acoustically reduced morphologically complex words: The roles of deletion, duration, and frequency of occurrence. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 773-776). Dudweiler: Pirrot.

    Abstract

    This study addresses the roles of segment deletion, durational reduction, and frequency of use in the comprehension of morphologically complex words. We report two auditory lexical decision experiments with reduced and unreduced prefixed Dutch words. We found that segment deletions as such delayed comprehension. Simultaneously, however, longer durations of the different parts of the words appeared to increase lexical competition, either from the word’s stem (Experiment 1) or from the word’s morphological continuation forms (Experiment 2). Increased lexical competition especially slowed down the comprehension of low-frequency words, which shows that speakers are not catering to listeners’ needs when they reduce especially high-frequency words.
  • Ernestus, M., & Baayen, R. H. (2007). Intraparadigmatic effects on the perception of voice. In J. van de Weijer, & E. J. van der Torre (Eds.), Voicing in Dutch: (De)voicing-phonology, phonetics, and psycholinguistics (pp. 153-173). Amsterdam: Benjamins.

    Abstract

    In Dutch, all morpheme-final obstruents are voiceless in word-final position. As a consequence, the distinction between obstruents that are voiced before vowel-initial suffixes and those that are always voiceless is neutralized. This study adds to the existing evidence that the neutralization is incomplete: neutralized, alternating plosives tend to have shorter bursts than non-alternating plosives. Furthermore, in a rating study, listeners scored the alternating plosives as more voiced than the non-alternating plosives, showing sensitivity to the subtle subphonemic cues in the acoustic signal. Importantly, the participants who were presented with the complete words, instead of just the final rhymes, scored the alternating plosives as even more voiced. This shows that listeners’ perception of voice is affected by their knowledge of the obstruent’s realization in the word’s morphological paradigm. Apparently, subphonemic paradigmatic levelling is a characteristic of both production and perception. We explain the effects within an analogy-based approach.