
Publications: Language and Comprehension

Displaying 241 - 260 of 836
  • Tuinman, A., & Cutler, A. (2010). Casual speech processes: L1 knowledge and L2 speech perception. Talk presented at Sixth International Symposium on the Acquisition of Second Language Speech [New Sounds 2010]. Poznań, Poland. 2010-05-01 - 2010-05-03.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word learning in nine-month-olds: Dynamics of picture-word priming. Talk presented at 8th Sepex conference / 1st Joint conference of the EPS and SEPEX. Granada, Spain. 2010-04.

    Abstract

    How do infants learn words? Most studies address this question by focusing on novel word learning. Only a few studies concentrate on the stage when infants learn their first words. Schafer (2005) showed that 12‐month‐olds can recognize novel exemplars of early typical word categories, but only after training them from nine months on. What happens in the brain during such training? With event‐related potentials, we studied the effect of training context on word comprehension. Twenty-four normally developing Dutch nine‐month‐olds (± 14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15‐month‐olds. All trials consisted of a high‐resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training‐test block contrasted two words that did not share initial phonemes or semantic class. The training phase started with six trials of one category, followed by six trials of the second category. We manipulated the type/token ratio of the training context (one versus six exemplars). Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appeared larger when the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture‐word mismatch paradigm. Here, we observed an N400 similar to the one Mills et al. (2005) found for 13‐month‐olds. German 12‐month‐olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to an immaturity of the N400 mechanism. The N400 was present in Dutch 9‐month‐olds, even though some parents judged their child not to understand most of the words. There was no interaction with training type, suggesting that type/token ratio does not affect infants’ word recognition of novel exemplars.
  • Weber, A., & Poellmann, K. (2010). Identifying foreign speakers with an unfamiliar accent or in an unfamiliar language. In New Sounds 2010: Sixth International Symposium on the Acquisition of Second Language Speech (pp. 536-541). Poznan, Poland: Adam Mickiewicz University.
  • Cutler, A., Mitterer, H., Brouwer, S., & Tuinman, A. (2010). Phonological competition in casual speech. In Proceedings of DiSS-LPSS Joint Workshop 2010 (pp. 43-46).
  • Braun, B., & Tagliapietra, L. (2010). The role of contrastive intonation contours in the retrieval of contextual alternatives. In D. G. Watson, M. Wagner, & E. Gibson (Eds.), Experimental and theoretical advances in prosody (pp. 1024-1043). Hove: Psychology Press.

    Abstract

    Sentences with a contrastive intonation contour are usually produced when the speaker entertains alternatives to the accented words. However, such contrastive sentences are frequently produced without making the alternatives explicit for the listener. In two cross-modal associative priming experiments we tested in Dutch whether such contextual alternatives become available to listeners upon hearing a sentence with a contrastive intonation contour compared with a sentence with a non-contrastive one. The first experiment tested the recognition of contrastive associates (contextual alternatives to the sentence-final primes), the second one the recognition of non-contrastive associates (generic associates which are not alternatives). Results showed that contrastive associates were facilitated when the primes occurred in sentences with a contrastive intonation contour but not in sentences with a non-contrastive intonation. Non-contrastive associates were weakly facilitated independent of intonation. Possibly, contrastive contours trigger an accommodation mechanism by which listeners retrieve the contrast available for the speaker.
  • Tucker, B. V., & Warner, N. (2010). What it means to be phonetic or phonological: The case of Romanian devoiced nasals. Phonology, 27, 289-324. doi:10.1017/S0952675710000138.

    Abstract

    Phonological patterns and detailed phonetic patterns can combine to produce unusual acoustic results, but criteria for what aspects of a pattern are phonetic and what aspects are phonological are often disputed. Early literature on Romanian makes mention of nasal devoicing in word-final clusters (e.g. in /basm/ 'fairy-tale'). Using acoustic, aerodynamic and ultrasound data, the current work investigates how syllable structure, prosodic boundaries, phonetic paradigm uniformity and assimilation influence Romanian nasal devoicing. It provides instrumental phonetic documentation of devoiced nasals, a phenomenon that has not been widely studied experimentally, in a phonetically underdocumented language. We argue that sound patterns should not be separated into phonetics and phonology as two distinct systems, but neither should they all be grouped together as a single, undifferentiated system. Instead, we argue for viewing the distinction between phonetics and phonology as a largely continuous multidimensional space, within which sound patterns, including Romanian nasal devoicing, fall.
  • Seuren, P. A. M. (2010). Presupposition. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 589-596). Amsterdam: Elsevier.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific to speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking, but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement in lyrics recognition. This benefit was, moreover, robust across participants, phrases, and repetitions of the test materials. Our results provide the first evidence that lyrics recognition, just like speech and music perception, is a multimodal process.
  • Cutler, A. (2010). Speech segmentation and its payoffs [Colloquium]. Talk presented at The Australian National University. Canberra. 2010-07-23.

    Abstract

    Speech is a continuous stream. Listeners can only make sense of speech by identifying the components that comprise it - words. Segmenting speech into words is an operation which has to be learned very early, since it is how infants compile even their initial vocabulary. Evidence from new behavioural and electrophysiological studies of infant speech perception illustrates this learning process. Infants’ relative success at achieving speech segmentation in fact turns out to be a direct predictor of language skills during later development. Adult listeners segment speech so efficiently, however, that they are virtually never aware of the operation of segmentation. In part they achieve this level of efficiency by exploiting accrued knowledge of relevant structure in the native language. Amassing this language-specific knowledge also starts in infancy. However, some relevant features call on more advanced levels of language processing ability; the continuous refinement of segmentation efficiency is apparent in that (as revealed by adult listening studies across a dozen or so languages) these structural features are exploited for segmentation too, even if applying them means overturning constraints used, perhaps universally, by infants.
  • McQueen, J. M., & Cutler, A. (2010). Cognitive processes in speech perception. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 489-520). Oxford: Blackwell.
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. Talk presented at 12th Conference on Laboratory Phonology. University of New Mexico in Albuquerque, NM. 2010-07-08 - 2010-07-10.
  • Braun, B., & Chen, A. (2010). Intonation of 'now' in resolving scope ambiguity in English and Dutch. Journal of Phonetics, 38, 431-444. doi:10.1016/j.wocn.2010.04.002.

    Abstract

    The adverb now in English (nu in Dutch) can draw listeners’ attention to an upcoming contrast (e.g., ‘Put X in Y. Now put X in Z’). In Dutch, but not English, the position of this sequential adverb may disambiguate which constituent is contrasted. We investigated whether and how the intonational realization of now/nu is varied to signal different scopes and whether it interacts with word order. Three contrast conditions (contrast in object, location, or both) were produced by eight Dutch and eight English speakers. Results showed no consistent use of word order for scope disambiguation in Dutch. Importantly, independent of language, an unaccented now/nu signaled a contrasting object while an accented now/nu signaled a contrast in the location. Since these intonational patterns were independent of word order, we interpreted the results in the framework of grammatical saliency: now/nu appears to be unmarked when the contrast lies in a salient constituent (the object) but marked with a prominent rise when a less salient constituent is contrasted (the location).

  • Sjerps, M. J., & Smiljanic, R. (2010). The influence of language background on the relative perception of vowels. Poster presented at the 160th Meeting of the Acoustical Society of America, Cancun, Mexico.
  • Reinisch, E., Jesse, A., & Nygaard, L. C. (2010). Tone of voice helps learning the meaning of novel adjectives. Poster presented at The 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.

    Abstract

    To understand spoken words, listeners have to cope with seemingly meaningless variability in the speech signal. Speakers vary, for example, their tone of voice (ToV) by changing speaking rate, pitch, vocal effort, and loudness. This variation is independent of "linguistic prosody" such as sentence intonation or speech rhythm. The variation due to ToV, however, is not random. Speakers use, for example, higher pitch when referring to small objects than when referring to large objects, and importantly, adult listeners are able to use these non-lexical ToV cues to distinguish between the meanings of antonym pairs (e.g., big-small; Nygaard, Herold, & Namy, 2009). In the present study, we asked whether listeners infer the meaning of novel adjectives from ToV and subsequently interpret these adjectives according to the learned meaning even in the absence of ToV. Moreover, if listeners actually acquire these adjectival meanings, then they should generalize these word meanings to novel referents. ToV would thus be a semantic cue to lexical acquisition. This hypothesis was tested in an exposure-test paradigm with adult listeners. In the experiment, listeners' eye movements to picture pairs were monitored. The picture pairs represented the endpoints of the adjectival dimensions big-small, hot-cold, and strong-weak (e.g., an elephant and an ant represented big-small). Four picture pairs per category were used. While viewing the pictures, participants listened to lexically unconstraining sentences containing novel adjectives, for example, "Can you find the foppick one?" During exposure, the sentences were spoken in infant-directed speech with the intended adjectival meaning expressed by ToV. Word-meaning pairings were counterbalanced across participants. Each word was repeated eight times. Listeners had no explicit task.
    To guide listeners' attention to the relation between the words and pictures, three sets of filler trials were included that contained real English adjectives (e.g., full-empty). In the subsequent test phase, participants heard the novel adjectives in neutral adult-directed ToV. Test sentences were recorded before the speaker was informed about the intended word meanings. Participants had to choose which of two pictures on the screen the speaker referred to. Picture pairs that were presented during the exposure phase and four new picture pairs per category that varied along the critical dimensions were tested. During exposure, listeners did not spontaneously direct their gaze to the intended referent at the first presentation. But as indicated by listeners' fixation behavior, they quickly learned the relationship between ToV and word meaning over only two exposures. Importantly, during test, participants consistently identified the intended referent object even in the absence of informative ToV. Learning was found for all three tested categories and did not depend on whether the picture pairs had been presented during exposure. Listeners thus use ToV not only to distinguish between antonym pairs but also to extract word meaning from ToV and assign this meaning to novel words. The newly learned word meanings can then be generalized to novel referents even in the absence of ToV cues. These findings suggest that ToV can be used as a semantic cue to lexical acquisition.
    Reference: Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009). The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33, 127-146.
  • Mitterer, H., McQueen, J. M., Bosker, H. R., & Poellmann, K. (2010). Adapting to phonological reduction: Tracking how learning from talker-specific episodes helps listeners recognize reductions. Talk presented at the 5th annual meeting of the Schwerpunktprogramm (SPP) 1234/2: Phonological and phonetic competence: between grammar, signal processing, and neural activity. München, Germany.
  • Cutler, A., Cooke, M., & Lecumberri, M. L. G. (2010). Preface. Speech Communication, 52, 863. doi:10.1016/j.specom.2010.11.003.

    Abstract

    Adverse listening conditions always make the perception of speech harder, but their deleterious effect is far greater if the speech we are trying to understand is in a non-native language. An imperfect signal can be coped with by recourse to the extensive knowledge one has of a native language, and imperfect knowledge of a non-native language can still support useful communication when speech signals are high-quality. But the combination of imperfect signal and imperfect knowledge leads rapidly to communication breakdown. This phenomenon is undoubtedly well known to every reader of Speech Communication from personal experience. Many readers will also have a professional interest in explaining, or remedying, the problems it produces. The journal’s readership being a decidedly interdisciplinary one, this interest will involve quite varied scientific approaches, including (but not limited to) modelling the interaction of first and second language vocabularies and phonemic repertoires, developing targeted listening training for language learners, and redesigning the acoustics of classrooms and conference halls. In other words, the phenomenon that this special issue deals with is a well-known one that raises important scientific and practical questions across a range of speech communication disciplines, and Speech Communication is arguably the ideal vehicle for presentation of such a breadth of approaches in a single volume. The call for papers for this issue elicited a large number of submissions from across the full range of the journal’s interdisciplinary scope, requiring the guest editors to apply very strict criteria to the final selection.
Perhaps unique in the history of treatments of this topic is the combination represented by the guest editors for this issue: a phonetician whose primary research interest is in second-language speech (MLGL), an engineer whose primary research field is the acoustics of masking in speech processing (MC), and a psychologist whose primary research topic is the recognition of spoken words (AC). In the opening article of the issue, these three authors together review the existing literature on listening to second-language speech under adverse conditions, bringing together these differing perspectives for the first time in a single contribution. The introductory review is followed by 13 new experimental reports of phonetic, acoustic and psychological studies of the topic. The guest editors thank Speech Communication editor Marc Swerts and the journal’s team at Elsevier, as well as all the reviewers who devoted time and expert efforts to perfecting the contributions to this issue.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability and later language development: Insight from ERP's. Talk presented at Child Language Seminar 2010. London. 2010-06-24 - 2010-06-26.
  • Broersma, M. (2010). Perception of final fricative voicing: Native and nonnative listeners’ use of vowel duration. Journal of the Acoustical Society of America, 127, 1636-1644. doi:10.1121/1.3292996.
  • Seuren, P. A. M. (2010). Aristotle and linguistics. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 25-27). Amsterdam: Elsevier.

    Abstract

    Aristotle's importance in the professional study of language consists first of all in the fact that he demythologized language and made it an object of rational investigation. In the context of his theory of truth as correspondence, he also provided the first semantic analysis of propositions in that he distinguished two main constituents, the predicate, which expresses a property, and the remainder of the proposition, referring to a substance to which the property is assigned. That assignment is either true or false. Later, the ‘remainder’ was called subject term, and the Aristotelian predicate was identified with the verb in the sentence. The Aristotelian predicate, however, is more like what is now called the ‘comment,’ whereas his remainder corresponds to the topic. Aristotle, furthermore, defined nouns and verbs as word classes. In addition, he introduced the term ‘case’ for paradigmatic morphological variation.
  • Cutler, A. (2010). How the native language shapes listening to speech. LOT Winter School 2010, Amsterdam, Free University (VU). Amsterdam, the Netherlands, 2010-01-18 - 2010-01-22.
