Publications

  • Cohen, E. (2010). An author meets her critics: Around "The mind possessed: The cognition of spirit possession in an Afro-Brazilian religious tradition" by Emma Cohen [Response to comments by Diana Espirito Santo, Arnaud Halloy, and Pierre Lienard]. Religion and Society: Advances in Research, 1(1), 164-176. doi:10.3167/arrs.2010.010112.
  • Cohen, E. (2010). Anthropology of knowledge. Journal of the Royal Anthropological Institute, 16(S1), S193-S202. doi:10.1111/j.1467-9655.2010.01617.x.

    Abstract

    Explanatory accounts of the emergence, spread, storage, persistence, and transformation of knowledge face numerous theoretical and methodological challenges. This paper argues that although anthropologists are uniquely positioned to address some of these challenges, joint engagement with relevant research in neighbouring disciplines holds considerable promise for advancement in the area. Researchers across the human and social sciences are increasingly recognizing the importance of conjointly operative and mutually contingent bodily, cognitive, neural, and social mechanisms informing the generation and communication of knowledge. Selected cognitive scientific work, in particular, is reviewed here and used to illustrate how anthropology may potentially richly contribute not only to descriptive and interpretive endeavours, but to the development and substantiation of explanatory accounts also.
  • Cohen, E. (2012). [Review of the book Searching for Africa in Brazil: Power and Tradition in Candomblé by Stefania Capone]. Critique of Anthropology, 32, 217-218. doi:10.1177/0308275X12439961.
  • Cohen, E. (2010). [Review of the book The accidental mind: How brain evolution has given us love, memory, dreams, and god, by David J. Linden]. Journal for the Study of Religion, Nature & Culture, 4(3), 235-238. doi:10.1558/jsrnc.v4i3.239.
  • Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.

    Abstract

    Recent game-theoretic simulation and analytical models have demonstrated that cooperative strategies mediated by indicators of cooperative potential, or “tags,” can invade, spread, and resist invasion by noncooperators across a range of population-structure and cost-benefit scenarios. The plausibility of these models is potentially relevant for human evolutionary accounts insofar as humans possess some phenotypic trait that could serve as a reliable tag. Linguistic markers, such as accent and dialect, have frequently been either cursorily defended or promptly dismissed as satisfying the criteria of a reliable and evolutionarily viable tag. This paper integrates evidence from a range of disciplines to develop and assess the claim that speech accent mediated the evolution of tag-based cooperation in humans. Existing evidence warrants the preliminary conclusion that accent markers meet the demands of an evolutionarily viable tag and potentially afforded a cost-effective solution to the challenges of maintaining viable cooperative relationships in diffuse, regional social networks.
  • Cohen, E., Ejsmond-Frey, R., Knight, N., & Dunbar, R. (2010). Rowers’ high: Behavioural synchrony is correlated with elevated pain thresholds. Biology Letters, 6, 106-108. doi:10.1098/rsbl.2009.0670.

    Abstract

    Physical exercise is known to stimulate the release of endorphins, creating a mild sense of euphoria that has rewarding properties. Using pain tolerance (a conventional non-invasive assay for endorphin release), we show that synchronized training in a college rowing crew creates a heightened endorphin surge compared with a similar training regime carried out alone. This heightened effect from synchronized activity may explain the sense of euphoria experienced during other social activities (such as laughter, music-making and dancing) that are involved in social bonding in humans and possibly other vertebrates.
  • Cohen, E. (2010). Where humans and spirits meet: The politics of rituals and identified spirits in Zanzibar by Kjersti Larsen [Book review]. American Ethnologist, 37, 386-387. doi:10.1111/j.1548-1425.2010.01262_6.x.
  • Collins, J. (2012). The evolution of the Greenbergian word order correlations. In T. C. Scott-Phillips, M. Tamariz, E. A. Cartmill, & J. R. Hurford (Eds.), The evolution of language. Proceedings of the 9th International Conference (EVOLANG9) (pp. 72-79). Singapore: World Scientific.
  • Collins, J. (2024). Linguistic areas and prehistoric migrations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Colzato, L. S., Zech, H., Hommel, B., Verdonschot, R. G., Van den Wildenberg, W. P. M., & Hsieh, S. (2012). Loving-kindness brings loving-kindness: The impact of Buddhism on cognitive self-other integration. Psychonomic Bulletin & Review, 19(3), 541-545. doi:10.3758/s13423-012-0241-y.

    Abstract

    Common wisdom has it that Buddhism enhances compassion and self-other integration. We put this assumption to empirical test by comparing practicing Taiwanese Buddhists with well-matched atheists. Buddhists showed more evidence of self-other integration in the social Simon task, which assesses the degree to which people co-represent the actions of a coactor. This suggests that self-other integration and task co-representation vary as a function of religious practice.
  • Connell, L., Cai, Z. G., & Holler, J. (2012). Do you see what I'm singing? Visuospatial movement biases pitch perception. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 252-257). Austin, TX: Cognitive Science Society.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Cooke, M., García Lecumberri, M. L., Scharenborg, O., & Van Dommelen, W. A. (2010). Language-independent processing in speech perception: Identification of English intervocalic consonants by speakers of eight European languages. Speech Communication, 52, 954-967. doi:10.1016/j.specom.2010.04.004.

    Abstract

    Processing speech in a non-native language requires listeners to cope with influences from their first language and to overcome the effects of limited exposure and experience. These factors may be particularly important when listening in adverse conditions. However, native listeners also suffer in noise, and the intelligibility of speech in noise clearly depends on factors which are independent of a listener’s first language. The current study explored the issue of language-independence by comparing the responses of eight listener groups differing in native language when confronted with the task of identifying English intervocalic consonants in three masker backgrounds, viz. stationary speech-shaped noise, temporally-modulated speech-shaped noise and competing English speech. The study analysed the effects of (i) noise type, (ii) speaker, (iii) vowel context, (iv) consonant, (v) phonetic feature classes, (vi) stress position, (vii) gender and (viii) stimulus onset relative to noise onset. A significant degree of similarity in the response to many of these factors was evident across all eight language groups, suggesting that acoustic and auditory considerations play a large role in determining intelligibility. Language-specific influences were observed in the rankings of individual consonants and in the masking effect of competing speech relative to speech-modulated noise.
  • Coopmans, C. W., Mai, A., & Martin, A. E. (2024). “Not” in the brain and behavior. PLOS Biology, 22: e3002656. doi:10.1371/journal.pbio.3002656.
  • Cornelis, S. S., IntHout, J., Runhart, E. H., Grunewald, O., Lin, S., Corradi, Z., Khan, M., Hitti-Malin, R. J., Whelan, L., Farrar, G. J., Sharon, D., Van den Born, L. I., Arno, G., Simcoe, M., Michaelides, M., Webster, A. R., Roosing, S., Mahroo, O. A., Dhaenens, C.-M., Cremers, F. P. M., & ABCA4 Study Group (2024). Representation of women among individuals with mild variants in ABCA4-associated retinopathy: A meta-analysis. JAMA Ophthalmology, 142(5), 463-471. doi:10.1001/jamaophthalmol.2024.0660.

    Abstract

    Importance
    Previous studies indicated that female sex might be a modifier in Stargardt disease, which is an ABCA4-associated retinopathy.

    Objective
    To investigate whether women are overrepresented among individuals with ABCA4-associated retinopathy who are carrying at least 1 mild allele or carrying nonmild alleles.

    Data Sources
    Literature data, data from 2 European centers, and a new study. Data from a Radboudumc database and from the Rotterdam Eye Hospital were used for exploratory hypothesis testing.

    Study Selection
    Studies investigating the sex ratio in individuals with ABCA4-associated retinopathy and data from centers that collected ABCA4 variant and sex data. The literature search was performed on February 1, 2023; data from the centers were from before 2023.

    Data Extraction and Synthesis
    Random-effects meta-analyses were conducted to test whether the proportions of women among individuals with ABCA4-associated retinopathy with mild and nonmild variants differed from 0.5, including subgroup analyses for mild alleles. Sensitivity analyses were performed excluding data with possibly incomplete variant identification. χ2 Tests were conducted to compare the proportions of women in adult-onset autosomal non–ABCA4-associated retinopathy and adult-onset ABCA4-associated retinopathy and to investigate if women with suspected ABCA4-associated retinopathy are more likely to obtain a genetic diagnosis. Data analyses were performed from March to October 2023.

    Main Outcomes and Measures
    Proportion of women per ABCA4-associated retinopathy group. The exploratory testing included sex ratio comparisons for individuals with ABCA4-associated retinopathy vs those with other autosomal retinopathies and for individuals with ABCA4-associated retinopathy who underwent genetic testing vs those who did not.

    Results
    Women were significantly overrepresented in the mild variant group (proportion, 0.59; 95% CI, 0.56-0.62; P < .001) but not in the nonmild variant group (proportion, 0.50; 95% CI, 0.46-0.54; P = .89). Sensitivity analyses confirmed these results. Subgroup analyses on mild variants showed differences in the proportions of women. Furthermore, in the Radboudumc database, the proportion of adult women among individuals with ABCA4-associated retinopathy (652/1154 = 0.56) was 0.10 (95% CI, 0.05-0.15) higher than among individuals with other retinopathies (280/602 = 0.47).

    Conclusions and Relevance
    This meta-analysis supports the likelihood that sex is a modifier in developing ABCA4-associated retinopathy for individuals with a mild ABCA4 allele. This finding may be relevant for prognosis predictions and recurrence risks for individuals with ABCA4-associated retinopathy. Future studies should further investigate whether the overrepresentation of women is caused by differences in the disease mechanism, by differences in health care–seeking behavior, or by health care discrimination between women and men with ABCA4-associated retinopathy.
  • Corps, R. E., & Pickering, M. (2024). Response planning during question-answering: Does deciding what to say involve deciding how to say it? Psychonomic Bulletin & Review, 31, 839-848. doi:10.3758/s13423-023-02382-3.

    Abstract

    To answer a question, speakers must determine their response and formulate it in words. But do they decide on a response before formulation, or do they formulate different potential answers before selecting one? We addressed this issue in a verbal question-answering experiment. Participants answered questions more quickly when they had one potential answer (e.g., Which tourist attraction in Paris is very tall?) than when they had multiple potential answers (e.g., What is the name of a Shakespeare play?). Participants also answered more quickly when the set of potential answers was on average short rather than long, regardless of whether there was only one or multiple potential answers. Thus, participants were not affected by the linguistic complexity of unselected but plausible answers. These findings suggest that participants select a single answer before formulation.
  • Cox, S., Rösler, D., & Skiba, R. (1989). A tailor-made database for language teaching material. Literary & Linguistic Computing, 4(4), 260-264.
  • Crasborn, O., & Windhouwer, M. (2012). ISOcat data categories for signed language resources. In E. Efthimiou, G. Kouroupetroglou, & S.-E. Fotinea (Eds.), Gesture and sign language in human-computer interaction and embodied communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25-27, 2011, revised selected papers (pp. 118-128). Heidelberg: Springer.

    Abstract

    As the creation of signed language resources is gaining speed world-wide, the need for standards in this field becomes more acute. This paper discusses the state of the field of signed language resources, their metadata descriptions, and annotations that are typically made. It then describes the role that ISOcat may play in this process and how it can stimulate standardisation without imposing standards. Finally, it makes some initial proposals for the thematic domain ‘sign language’ that was introduced in 2011.
  • Cristia, A., & Peperkamp, S. (2012). Generalizing without encoding specifics: Infants infer phonotactic patterns on sound classes. In A. K. Biller, E. Y. Chung, & A. E. Kimball (Eds.), Proceedings of the 36th Annual Boston University Conference on Language Development (BUCLD 36) (pp. 126-138). Somerville, Mass.: Cascadilla Press.
  • Cristia, A., Seidl, A., & Onishi, K. H. (2010). Indices acoustiques de phonémicité et d'allophonie dans la parole adressée aux enfants. Actes des XXVIIIèmes Journées d’Étude sur la Parole (JEP), 28, 277-280.
  • Cristia, A., Seidl, A., Vaughn, C., Schmale, R., Bradlow, A., & Floccia, C. (2012). Linguistic processing of accented speech across the lifespan. Frontiers in Psychology, 3, 479. doi:10.3389/fpsyg.2012.00479.

    Abstract

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities, by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps left to face in future work.
  • Cristia, A. (2010). Phonetic enhancement of sibilants in infant-directed speech. The Journal of the Acoustical Society of America, 128, 424-434. doi:10.1121/1.3436529.

    Abstract

    The hypothesis that vocalic categories are enhanced in infant-directed speech (IDS) has received a great deal of attention and support. In contrast, work focusing on the acoustic implementation of consonantal categories has been scarce, and positive, negative, and null results have been reported. However, interpreting this mixed evidence is complicated by the facts that the definition of phonetic enhancement varies across articles, that small and heterogeneous groups have been studied across experiments, and further that the categories chosen are likely affected by other characteristics of IDS. Here, an analysis of the English sibilants /s/ and /ʃ/ in a large corpus of caregivers’ speech to another adult and to their infant suggests that consonantal categories are indeed enhanced, even after controlling for typical IDS prosodic characteristics.
  • Cronin, K. A. (2012). Cognitive aspects of prosocial behavior in nonhuman primates. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning. Part 3 (2nd ed., pp. 581-583). Berlin: Springer.

    Abstract

    Definition: Prosocial behavior is any behavior performed by one individual that results in a benefit for another individual. Prosocial motivations, prosocial preferences, or other-regarding preferences refer to the psychological predisposition to behave in the best interest of another individual. A behavior need not be costly to the actor to be considered prosocial; the concept is thus distinct from altruistic behavior, which requires that the actor incurs some cost when providing a benefit to another.
  • Cronin, K. A., Schroeder, K. K. E., & Snowdon, C. T. (2010). Prosocial behaviour emerges independent of reciprocity in cottontop tamarins. Proceedings of the Royal Society of London Series B-Biological Sciences, 277, 3845-3851. doi:10.1098/rspb.2010.0879.

    Abstract

    The cooperative breeding hypothesis posits that cooperatively breeding species are motivated to act prosocially, that is, to behave in ways that provide benefits to others, and that cooperative breeding has played a central role in the evolution of human prosociality. However, investigations of prosocial behaviour in cooperative breeders have produced varying results and the mechanisms contributing to this variation are unknown. We investigated whether reciprocity would facilitate prosocial behaviour among cottontop tamarins, a cooperatively breeding primate species likely to engage in reciprocal altruism, by comparing the number of food rewards transferred to partners who had either immediately previously provided or denied rewards to the subject. Subjects were also tested in a non-social control condition. Overall, results indicated that reciprocity increased food transfers. However, temporal analyses revealed that when the tamarins' behaviour was evaluated in relation to the non-social control, results were best explained by (i) an initial depression in the transfer of rewards to partners who recently denied rewards, and (ii) a prosocial effect that emerged late in sessions independent of reciprocity. These results support the cooperative breeding hypothesis, but suggest a minimal role for positive reciprocity, and emphasize the importance of investigating proximate temporal mechanisms underlying prosocial behaviour.
  • Cronin, K. A. (2012). Prosocial behaviour in animals: The influence of social relationships, communication and rewards. Animal Behaviour, 84, 1085-1093. doi:10.1016/j.anbehav.2012.08.009.

    Abstract

    Researchers have struggled to obtain a clear account of the evolution of prosocial behaviour despite a great deal of recent effort. The aim of this review is to take a brief step back from addressing the question of evolutionary origins of prosocial behaviour in order to identify contextual factors that are contributing to variation in the expression of prosocial behaviour and hindering progress towards identifying phylogenetic patterns. Most available data come from the Primate Order, and the choice of contextual factors to consider was informed by theory and practice, including the nature of the relationship between the potential donor and recipient, the communicative behaviour of the recipients, and features of the prosocial task including whether rewards are visible and whether the prosocial choice creates an inequity between actors. Conclusions are drawn about the facilitating or inhibiting impact of each of these factors on the expression of prosocial behaviour, and areas for future research are highlighted. Acknowledging the impact of these contextual features on the expression of prosocial behaviours should stimulate new research into the proximate mechanisms that drive these effects, yield experimental designs that better control for potential influences on prosocial expression, and ultimately allow progress towards reconstructing the evolutionary origins of prosocial behaviour.
  • Cronin, K. A., & Sanchez, A. (2012). Social dynamics and cooperation: The case of nonhuman primates and its implications for human behavior. Advances in Complex Systems, 15, 1250066. doi:10.1142/S021952591250066X.

    Abstract

    The social factors that influence cooperation have remained largely uninvestigated but have the potential to explain much of the variation in cooperative behavior observed in the natural world. We show here that certain dimensions of the social environment, namely the size of the social group, the degree of social tolerance expressed, the structure of the dominance hierarchy, and the patterns of dispersal, may influence the emergence and stability of cooperation in predictable ways. Furthermore, the social environment experienced by a species over evolutionary time will have shaped their cognition to provide certain strengths and strategies that are beneficial in their species' social world. These cognitive adaptations will in turn impact the likelihood of cooperating in a given social environment. Experiments with one primate species, the cottontop tamarin, illustrate how social dynamics may influence emergence and stability of cooperative behavior in this species. We then take a more general viewpoint and argue that the hypotheses presented here require further experimental work and the addition of quantitative modeling to obtain a better understanding of how social dynamics influence the emergence and stability of cooperative behavior in complex systems. We conclude by pointing out subsequent specific directions for models and experiments that will allow relevant advances in the understanding of the emergence of cooperation.
  • Cutfield, S. (2012). Demonstratives in Dalabon: A language of southwestern Arnhem Land. PhD Thesis, Monash University, Melbourne.

    Abstract

    This study is a comprehensive description of the nominal demonstratives in Dalabon, a severely endangered Gunwinyguan non-Pama-Nyungan language of southwestern Arnhem Land, northern Australia. Demonstratives are attested in the basic vocabulary of every language, yet remain heretofore underdescribed in Australian languages. Traditional definitions of demonstratives as primarily making spatial reference have recently evolved at a great pace, with close analyses of demonstratives-in-use revealing that their use in spatial reference, in narrative discourse, and in interaction is significantly more complex than previously assumed, and that definitions of demonstrative forms are best developed after consideration of their use across these contexts. The present study reinforces findings of complexity in demonstrative use, and the significance of a multidimensional characterization of demonstrative forms. This study is therefore a contribution to the description of Dalabon, to the analysis of demonstratives in Australian languages, and to the theory and typology of demonstratives cross-linguistically. In this study, I present a multi-dimensional analysis of Dalabon demonstratives, using a variety of theoretical frameworks and research tools including descriptive linguistics, lexical-functional grammar, discourse analysis, gesture studies and pragmatics. Using data from personal narratives, improvised interactions and elicitation sessions to investigate the demonstratives, this study takes into account their morphosyntactic distribution, uses in the speech situation, interactional factors, discourse phenomena, concurrent gesture, and uses in personal narratives. I conclude with a unified account of the intensional and extensional semantics of each form surveyed. The Dalabon demonstrative paradigm divides into two types, those which are spatially-specific and those which are non-spatial.
The spatially-specific demonstratives nunda ‘this (in the here-space)’ and djakih ‘that (in the there-space)’ are shown not to encode the location of the referent per se, rather its relative position to dynamic physical and social elements of the speech situation such as the speaker’s engagement area and here-space. Both forms are also used as spatial adverbs to mean ‘here’ and ‘there’ respectively, while only nunda is also used as a temporal adverb ‘now, today’. The spatially-specific demonstratives are limited to situational use in narratives. The non-spatial demonstratives kanh/kanunh ‘that (identifiable)’ and nunh ‘that (unfamiliar, contrastive)’ are used in both the speech situation and personal narratives to index referents as ‘identifiable’ or ‘unfamiliar’ respectively. Their use in the speech situation can conversationally implicate that the referent is distal. The non-spatial demonstratives display the greatest diversity of use in narratives, each specializing for certain uses, yet their wide distribution across discourse usage types can be described on account of their intensional semantics. The findings of greatest typological interest in this study are that speakers’ choice of demonstrative in the speech situation is influenced by multiple simultaneous deictic parameters (including gesture); that oppositions in the Dalabon demonstrative paradigm are not equal, nor exclusively semantic; that the form nunh ‘that (unfamiliar, contrastive)’ is used to index a referent as somewhat inaccessible or unexpected; that the ‘recognitional’ form kanh/kanunh is instead described as ‘identifiable’; and that speakers use demonstratives to index emotional deixis to a referent, or to their addressee.
  • Cutfield, S. (2012). Foreword. Australian Journal of Linguistics, 32(4), 457-458.
  • Cutfield, S. (2012). Principles of Dalabon plant and animal names and classification. In D. Bordulk, N. Dalak, M. Tukumba, L. Bennett, R. Bordro Tingey, M. Katherine, S. Cutfield, M. Pamkal, & G. Wightman (Eds.), Dalabon plants and animals: Aboriginal biocultural knowledge from Southern Arnhem Land, North Australia (pp. 11-12). Palmerston, NT, Australia: Department of Land and Resource Management, Northern Territory.
  • Cutler, A. (1989). Auditory lexical access: Where do we start? In W. Marslen-Wilson (Ed.), Lexical representation and process (pp. 342-356). Cambridge, MA: MIT Press.

    Abstract

    The lexicon, considered as a component of the process of recognizing speech, is a device that accepts a sound image as input and outputs meaning. Lexical access is the process of formulating an appropriate input and mapping it onto an entry in the lexicon's store of sound images matched with their meanings. This chapter addresses the problems of auditory lexical access from continuous speech. The central argument to be proposed is that utterance prosody plays a crucial role in the access process. Continuous listening faces problems that are not present in visual recognition (reading) or in noncontinuous recognition (understanding isolated words). Aspects of utterance prosody offer a solution to these particular problems.
  • Cutler, A. (1979). Beyond parsing and lexical look-up. In R. J. Wales, & E. C. T. Walker (Eds.), New approaches to language mechanisms: a collection of psycholinguistic studies (pp. 133-149). Amsterdam: North-Holland.
  • Cutler, A., & Fear, B. D. (1991). Categoricality in acceptability judgements for strong versus weak vowels. In J. Llisterri (Ed.), Proceedings of the ESCA Workshop on Phonetics and Phonology of Speaking Styles (pp. 18.1-18.5). Barcelona, Catalonia: Universitat Autonoma de Barcelona.

    Abstract

    A distinction between strong and weak vowels can be drawn on the basis of vowel quality, of stress, or of both factors. An experiment was conducted in which sets of contextually matched word-initial vowels ranging from clearly strong to clearly weak were cross-spliced, and the naturalness of the resulting words was rated by listeners. The ratings showed that in general cross-spliced words were only significantly less acceptable than unspliced words when schwa was not involved; this supports a categorical distinction based on vowel quality.
  • Cutler, A. (2010). Abstraction-based efficiency in the lexicon. Laboratory Phonology, 1(2), 301-318. doi:10.1515/LABPHON.2010.016.

    Abstract

    Listeners learn from their past experience of listening to spoken words, and use this learning to maximise the efficiency of future word recognition. This paper summarises evidence that the facilitatory effects of drawing on past experience are mediated by abstraction, enabling learning to be generalised across new words and new listening situations. Phoneme category retuning, which allows adaptation to speaker-specific articulatory characteristics, is generalised on the basis of relatively brief experience to words previously unheard from that speaker. Abstract knowledge of prosodic regularities is applied to recognition even of novel words for which these regularities were violated. Prosodic word-boundary regularities drive segmentation of speech into words independently of the membership of the lexical candidate set resulting from the segmentation operation. Each of these different cases illustrates how abstraction from past listening experience has contributed to the efficiency of lexical recognition.
  • Cutler, A., & Clifton, Jr., C. (1999). Comprehending spoken language: A blueprint of the listener. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 123-166). Oxford University Press.
  • Cutler, A. (1979). Contemporary reaction to Rudolf Meringer’s speech error research. Historiographia Linguistica, 6, 57-76.
  • Cutler, A. (2012). Eentaalpsychologie is geen taalpsychologie: Part II. [Valedictory lecture Radboud University]. Nijmegen: Radboud University.

    Abstract

    Lecture delivered on the occasion of retirement as Professor of Comparative Psycholinguistics in the Faculty of Social Sciences of Radboud University Nijmegen, on Thursday 20 September 2012
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. In Abstracts of Laboratory Phonology 12 (pp. 115-116).
  • Cutler, A., & Davis, C. (2012). An orthographic effect in phoneme processing, and its limitations. Frontiers in Psychology, 3, 18. doi:10.3389/fpsyg.2012.00018.

    Abstract

    To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (1999). Foreword. In Slips of the Ear: Errors in the perception of Casual Conversation (pp. xiii-xv). New York City, NY, USA: Academic Press.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A., & Norris, D. (1979). Monitoring sentence comprehension. In W. E. Cooper, & E. C. T. Walker (Eds.), Sentence processing: Psycholinguistic studies presented to Merrill Garrett (pp. 113-134). Hillsdale: Erlbaum.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2010). How abstract phonemic categories are necessary for coping with speaker-related variation. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory phonology 10 (pp. 91-111). Berlin: de Gruyter.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A. (2012). Native listening: Language experience and the recognition of spoken words. Cambridge, MA: MIT Press.

    Abstract

    Understanding speech in our native tongue seems natural and effortless; listening to speech in a nonnative language is a different experience. In this book, Anne Cutler argues that listening to speech is a process of native listening because so much of it is exquisitely tailored to the requirements of the native language. Her cross-linguistic study (drawing on experimental work in languages that range from English and Dutch to Chinese and Japanese) documents what is universal and what is language specific in the way we listen to spoken language. Cutler describes the formidable range of mental tasks we carry out, all at once, with astonishing speed and accuracy, when we listen. These include evaluating probabilities arising from the structure of the native vocabulary, tracking information to locate the boundaries between words, paying attention to the way the words are pronounced, and assessing not only the sounds of speech but prosodic information that spans sequences of sounds. She describes infant speech perception, the consequences of language-specific specialization for listening to other languages, the flexibility and adaptability of listening (to our native languages), and how language-specificity and universality fit together in our language processing system. Drawing on her four decades of work as a psycholinguist, Cutler documents the recent growth in our knowledge about how spoken-word recognition works and the role of language structure in this process. Her book is a significant contribution to a vibrant and rapidly developing field.
  • Cutler, A. (2012). Native listening: The flexibility dimension. Dutch Journal of Applied Linguistics, 1(2), 169-187.

    Abstract

    The way we listen to spoken language is tailored to the specific benefit of native-language speech input. Listening to speech in non-native languages can be significantly hindered by this native bias. Is it possible to determine the degree to which a listener is listening in a native-like manner? Promising indications of how this question may be tackled are provided by new research findings concerning the great flexibility that characterises listening to the L1, in online adjustment of phonetic category boundaries for adaptation across talkers, and in modulation of lexical dynamics for adjustment across listening conditions. This flexibility pays off in many dimensions, including listening in noise, adaptation across dialects, and identification of voices. These findings further illuminate the robustness and flexibility of native listening, and potentially point to ways in which we might begin to assess degrees of ‘native-likeness’ in this skill.
  • Cutler, A., & Butterfield, S. (1989). Natural speech cues to word segmentation under difficult listening conditions. In J. Tubach, & J. Mariani (Eds.), Proceedings of Eurospeech 89: European Conference on Speech Communication and Technology: Vol. 2 (pp. 372-375). Edinburgh: CEP Consultants.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In three experiments, we examined how word boundaries are produced in deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., Mitterer, H., Brouwer, S., & Tuinman, A. (2010). Phonological competition in casual speech. In Proceedings of DiSS-LPSS Joint Workshop 2010 (pp. 43-46).
  • Cutler, A., & Chen, H.-C. (1995). Phonological similarity effects in Cantonese word recognition. In K. Elenius, & P. Branderud (Eds.), Proceedings of the Thirteenth International Congress of Phonetic Sciences: Vol. 1 (pp. 106-109). Stockholm: Stockholm University.

    Abstract

    Two lexical decision experiments in Cantonese are described in which the recognition of spoken target words as a function of phonological similarity to a preceding prime is investigated. Phonological similarity in first syllables produced inhibition, while similarity in second syllables led to facilitation. Differences between syllables in tonal and segmental structure had generally similar effects.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A., Otake, T., & Bruggeman, L. (2012). Phonologically determined asymmetries in vocabulary structure across languages. Journal of the Acoustical Society of America, 132(2), EL155-EL160. doi:10.1121/1.4737596.

    Abstract

    Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1999). Prosodische Struktur und Worterkennung bei gesprochener Sprache. In A. D. Friedrici (Ed.), Enzyklopädie der Psychologie: Sprachrezeption (pp. 49-83). Göttingen: Hogrefe.
  • Cutler, A. (1999). Prosody and intonation, processing issues. In R. A. Wilson, & F. C. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 682-683). Cambridge, MA: MIT Press.
  • Cutler, A., & Swinney, D. A. (1987). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A. (1991). Prosody in situations of communication: Salience and segmentation. In Proceedings of the Twelfth International Congress of Phonetic Sciences: Vol. 1 (pp. 264-270). Aix-en-Provence: Université de Provence, Service des publications.

    Abstract

    Speakers and listeners have a shared goal: to communicate. The processes of speech perception and of speech production interact in many ways under the constraints of this communicative goal; such interaction is as characteristic of prosodic processing as of the processing of other aspects of linguistic structure. Two of the major uses of prosodic information in situations of communication are to encode salience and segmentation, and these themes unite the contributions to the symposium introduced by the present review.
  • Cutler, A., & Norris, D. (1999). Sharpening Ockham’s razor (Commentary on W.J.M. Levelt, A. Roelofs & A.S. Meyer: A theory of lexical access in speech production). Behavioral and Brain Sciences, 22, 40-41.

    Abstract

    Language production and comprehension are intimately interrelated; and models of production and comprehension should, we argue, be constrained by common architectural guidelines. Levelt et al.'s target article adopts as guiding principle Ockham's razor: the best model of production is the simplest one. We recommend adoption of the same principle in comprehension, with consequent simplification of some well-known types of models.
  • Cutler, A. (1995). Spoken word recognition and production. In J. L. Miller, & P. D. Eimas (Eds.), Speech, language and communication (pp. 97-136). New York: Academic Press.

    Abstract

    This chapter highlights that most language behavior consists of speaking and listening. The chapter also reveals differences and similarities between speaking and listening. The laboratory study of word production raises formidable problems; ensuring that a particular word is produced may subvert the spontaneous production process. Word production is investigated via slips and tip-of-the-tongue (TOT) states, primarily via instances of processing failure, and via the picture-naming task. The methodology of word production is explained in the chapter. The chapter also explains the phenomenon of interaction between various stages of word production and the process of speech recognition. In this context, it explores the difference between sound and meaning and examines whether or not the comparisons are appropriate between the processes of recognition and production of spoken words. It also describes the similarities and differences in the structure of the recognition and production systems. Finally, the chapter highlights the common issues in recognition and production research, which include the nuances of frequency of occurrence, morphological structure, and phonological structure.
  • Cutler, A. (1999). Spoken-word recognition. In R. A. Wilson, & F. C. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 796-798). Cambridge, MA: MIT Press.
  • Cutler, A. (1995). Spoken-word recognition. In G. Bloothooft, V. Hazan, D. Hubert, & J. Llisterri (Eds.), European studies in phonetics and speech communication (pp. 66-71). Utrecht: OTS.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (2010). Strategic deployment of orthographic knowledge in phoneme detection. Language and Speech, 53(3), 307-320. doi:10.1177/0023830910371445.

    Abstract

    The phoneme detection task is widely used in spoken-word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realized. Listeners detected the target sounds [b, m, t, f, s, k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b, m, t], which have consistent word-initial spelling, than to the targets [f, s, k], which are inconsistently spelled, but only when spelling was rendered salient by the presence in the experiment of many irregularly spelled filler words. Within the inconsistent targets [f, s, k], there was no significant difference between responses to targets in words with more usual (foam, seed, cattle) versus less usual (phone, cede, kettle) spellings. Phoneme detection is thus not necessarily sensitive to orthographic effects; knowledge of spelling stored in the lexical representations of words does not automatically become available as word candidates are activated. However, salient orthographic manipulations in experimental input can induce such sensitivity. We attribute this to listeners' experience of the value of spelling in everyday situations that encourage phonemic decisions (such as learning new names).
  • Cutler, A. (1989). Straw modules [Commentary/Massaro: Speech perception]. Behavioral and Brain Sciences, 12, 760-762.
  • Cutler, A. (1984). Stress and accent in language production and understanding. In D. Gibbon, & H. Richter (Eds.), Intonation, accent and rhythm: Studies in discourse phonology (pp. 77-90). Berlin: de Gruyter.
  • Cutler, A., & Otake, T. (1999). Pitch accent in spoken-word recognition in Japanese. Journal of the Acoustical Society of America, 105, 1877-1888.

    Abstract

    Three experiments addressed the question of whether pitch-accent information may be exploited in the process of recognizing spoken words in Tokyo Japanese. In a two-choice classification task, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted (e.g., ka from baka HL or gaka LH); most judgments were correct, and listeners’ decisions were correlated with the fundamental frequency characteristics of the syllables. In a gating experiment, listeners heard initial fragments of words and guessed what the words were; their guesses overwhelmingly had the same initial accent structure as the gated word even when only the beginning CV of the stimulus (e.g., na- from nagasa HLL or nagashi LHH) was presented. In addition, listeners were more confident in guesses with the same initial accent structure as the stimulus than in guesses with different accent. In a lexical decision experiment, responses to spoken words (e.g., ame HL) were speeded by previous presentation of the same word (e.g., ame HL) but not by previous presentation of a word differing only in accent (e.g., ame LH). Together these findings provide strong evidence that accentual information constrains the activation and selection of candidates for spoken-word recognition.
  • Cutler, A., Cooke, M., & Lecumberri, M. L. G. (2010). Preface. Speech Communication, 52, 863. doi:10.1016/j.specom.2010.11.003.

    Abstract

    Adverse listening conditions always make the perception of speech harder, but their deleterious effect is far greater if the speech we are trying to understand is in a non-native language. An imperfect signal can be coped with by recourse to the extensive knowledge one has of a native language, and imperfect knowledge of a non-native language can still support useful communication when speech signals are high-quality. But the combination of imperfect signal and imperfect knowledge leads rapidly to communication breakdown. This phenomenon is undoubtedly well known to every reader of Speech Communication from personal experience. Many readers will also have a professional interest in explaining, or remedying, the problems it produces. The journal’s readership being a decidedly interdisciplinary one, this interest will involve quite varied scientific approaches, including (but not limited to) modelling the interaction of first and second language vocabularies and phonemic repertoires, developing targeted listening training for language learners, and redesigning the acoustics of classrooms and conference halls. In other words, the phenomenon that this special issue deals with is a well-known one, that raises important scientific and practical questions across a range of speech communication disciplines, and Speech Communication is arguably the ideal vehicle for presentation of such a breadth of approaches in a single volume. The call for papers for this issue elicited a large number of submissions from across the full range of the journal’s interdisciplinary scope, requiring the guest editors to apply very strict criteria to the final selection. 
    Perhaps unique in the history of treatments of this topic is the combination represented by the guest editors for this issue: a phonetician whose primary research interest is in second-language speech (MLGL), an engineer whose primary research field is the acoustics of masking in speech processing (MC), and a psychologist whose primary research topic is the recognition of spoken words (AC). In the opening article of the issue, these three authors together review the existing literature on listening to second-language speech under adverse conditions, bringing together these differing perspectives for the first time in a single contribution. The introductory review is followed by 13 new experimental reports of phonetic, acoustic and psychological studies of the topic. The guest editors thank Speech Communication editor Marc Swerts and the journal’s team at Elsevier, as well as all the reviewers who devoted time and expert efforts to perfecting the contributions to this issue.
  • Cutler, A. (1989). The new Victorians. New Scientist, (1663), 66.
  • Cutler, A. (1995). The perception of rhythm in spoken and written language. In J. Mehler, & S. Franck (Eds.), Cognition on cognition (pp. 283-288). Cambridge, MA: MIT Press.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A., & McQueen, J. M. (1995). The recognition of lexical units in speech. In B. De Gelder, & J. Morais (Eds.), Speech and reading: A comparative approach (pp. 33-47). Hove, UK: Erlbaum.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., Van Ooijen, B., & Norris, D. (1999). Vowels, consonants, and lexical activation. In J. Ohala, Y. Hasegawa, M. Ohala, D. Granville, & A. Bailey (Eds.), Proceedings of the Fourteenth International Congress of Phonetic Sciences: Vol. 3 (pp. 2053-2056). Berkeley: University of California.

    Abstract

    Two lexical decision studies examined the effects of single-phoneme mismatches on lexical activation in spoken-word recognition. One study was carried out in English, and involved spoken primes and visually presented lexical decision targets. The other study was carried out in Dutch, and primes and targets were both presented auditorily. Facilitation was found only for spoken targets preceded immediately by spoken primes; no facilitation occurred when targets were presented visually, or when intervening input occurred between prime and target. The effects of vowel mismatches and consonant mismatches were equivalent.
  • Cutler, A., & Clifton, Jr., C. (1984). The use of prosodic information in word recognition. In H. Bouma, & D. G. Bouwhuis (Eds.), Attention and performance X: Control of language processes (pp. 183-196). London: Erlbaum.

    Abstract

    In languages with variable stress placement, lexical stress patterns can convey information about word identity. The experiments reported here address the question of whether lexical stress information can be used in word recognition. The results allow the following conclusions: 1. Prior information as to the number of syllables and lexical stress patterns of words and nonwords does not facilitate lexical decision responses (Experiment 1). 2. The strong correspondences between grammatical category membership and stress pattern in bisyllabic English words (strong-weak stress being associated primarily with nouns, weak-strong with verbs) are not exploited in the recognition of isolated words (Experiment 2). 3. When a change in lexical stress also involves a change in vowel quality, i.e., a segmental as well as a suprasegmental alteration, effects on word recognition are greater than when no segmental correlates of suprasegmental changes are involved (Experiments 2 and 3). 4. Despite the above finding, when all other factors are controlled, lexical stress information per se can indeed be shown to play a part in the word-recognition process (Experiment 3).
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A. (1995). Universal and Language-Specific in the Development of Speech. Biology International, (Special Issue 33).
  • Cutler, A., & Shanley, J. (2010). Validation of a training method for L2 continuous-speech segmentation. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1844-1847).

    Abstract

    Recognising continuous speech in a second language is often unexpectedly difficult, as the operation of segmenting speech is so attuned to native-language structure. We report the initial steps in development of a novel training method for second-language listening, focusing on speech segmentation and employing a task designed for studying this: word-spotting. Listeners detect real words in sequences consisting of a word plus a minimal context. The present validation study shows that learners from varying non-English backgrounds successfully perform a version of this task in English, and display appropriate sensitivity to structural factors that also affect segmentation by native English listeners.
  • Cysouw, M., Dediu, D., & Moran, S. (2012). Comment on “Phonemic Diversity Supports a Serial Founder Effect Model of Language Expansion from Africa”. Science, 335, 657-b. doi:10.1126/science.1208841.

    Abstract

    We show that Atkinson’s (Reports, 15 April 2011, p. 346) intriguing proposal—that global linguistic diversity supports a single language origin in Africa—is an artifact of using suboptimal data, biased methodology, and unjustified assumptions. We criticize his approach using more suitable data, and we additionally provide new results suggesting a more complex scenario for the emergence of global linguistic diversity.
  • Dagklis, A., Ponzoni, M., Govi, S., Cangi, M. G., Pasini, E., Charlotte, F., Vino, A., Doglioni, C., Davi, F., Lossos, I. S., Ntountas, I., Papadaki, T., Dolcetti, R., Ferreri, A. J. M., Stamatopoulos, K., & Ghia, P. (2012). Immunoglobulin gene repertoire in ocular adnexal lymphomas: hints on the nature of the antigenic stimulation. Leukemia, 26, 814-821. doi:10.1038/leu.2011.276.

    Abstract

    Evidence from certain geographical areas links lymphomas of the ocular adnexa marginal zone B-cell lymphomas (OAMZL) with Chlamydophila psittaci (Cp) infection, suggesting that lymphoma development is dependent upon chronic stimulation by persistent infections. Notwithstanding that, the actual immunopathogenetical mechanisms have not yet been elucidated. As in other B-cell lymphomas, insight into this issue, especially with regard to potential selecting ligands, could be provided by analysis of the immunoglobulin (IG) receptors of the malignant clones. To this end, we studied the molecular features of IGs in 44 patients with OAMZL (40% Cp-positive), identifying features suggestive of a pathogenic mechanism of autoreactivity. Herein, we show that lymphoma cells express a distinctive IG repertoire, with electropositive antigen (Ag)-binding sites, reminiscent of autoantibodies (auto-Abs) recognizing DNA. Additionally, five (11%) cases of OAMZL expressed IGs homologous with autoreactive Abs or IGs of patients with chronic lymphocytic leukemia, a disease known for the expression of autoreactive IGs by neoplastic cells. In contrast, no similarity with known anti-Chlamydophila Abs was found. Taken together, these results strongly indicate that OAMZL may originate from B cells selected for their capability to bind Ags and, in particular, auto-Ags. In OAMZL associated with Cp infection, the pathogen likely acts indirectly on the malignant B cells, promoting the development of an inflammatory milieu, where auto-Ags could be exposed and presented, driving proliferation and expansion of self-reactive B cells.
  • D'Alessandra, Y., Devanna, P., Limana, F., Straino, S., Di Carlo, A., Brambilla, P. G., Rubino, M., Carena, M. C., Spazzafumo, L., De Simone, M., Micheli, B., Biglioli, P., Achilli, F., Martelli, F., Maggiolini, S., Marenzi, G., Pompilio, G., & Capogrossi, M. C. (2010). Circulating microRNAs are new and sensitive biomarkers of myocardial infarction. European Heart Journal, 31(22), 2765-2773. doi:10.1093/eurheartj/ehq167.

    Abstract

    Aims Circulating microRNAs (miRNAs) may represent a novel class of biomarkers; therefore, we examined whether acute myocardial infarction (MI) modulates miRNA plasma levels in humans and mice. Methods and results Healthy donors (n = 17) and patients (n = 33) with acute ST-segment elevation MI (STEMI) were evaluated. In one cohort (n = 25), the first plasma sample was obtained 517 ± 309 min after the onset of MI symptoms and after coronary reperfusion with percutaneous coronary intervention (PCI); miR-1, -133a, -133b, and -499-5p were ∼15- to 140-fold higher than control, whereas miR-122 and -375 were ∼87–90% lower than control; 5 days later, miR-1, -133a, -133b, -499-5p, and -375 were back to baseline, whereas miR-122 remained lower than control through Day 30. In additional patients (n = 8; four treated with thrombolysis and four with PCI), miRNAs and troponin I (TnI) were quantified simultaneously starting 156 ± 72 min after the onset of symptoms and at different times thereafter. Peak miR-1, -133a, and -133b expression and TnI level occurred at a similar time, whereas miR-499-5p exhibited a slower time course. In mice, miRNA plasma levels and TnI were measured 15 min after coronary ligation and at different times thereafter. The behaviour of miR-1, -133a, -133b, and -499-5p was similar to that in STEMI patients; further, reciprocal changes in the expression levels of these miRNAs were found in cardiac tissue 3–6 h after coronary ligation. In contrast, miR-122 and -375 exhibited minor changes and no significant modulation. In mice with acute hind-limb ischaemia, there was no increase in the plasma level of the above miRNAs. Conclusion Acute MI up-regulated miR-1, -133a, -133b, and -499-5p plasma levels, both in humans and mice, whereas miR-122 and -375 were lower than control only in STEMI patients. These miRNAs represent novel biomarkers of cardiac damage.
  • Dalla Bella, S., Janaqi, S., Benoit, C.-E., Farrugia, N., Bégel, V., Verga, L., Harding, E. E., & Kotz, S. A. (2024). Unravelling individual rhythmic abilities using machine learning. Scientific Reports, 14(1): 1135. doi:10.1038/s41598-024-51257-7.

    Abstract

    Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, like in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp in single tasks. To date we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities and their link with formal and informal music experience can be successfully captured by profiles including a minimal set of behavioral measures. These findings highlight that machine learning techniques can be employed successfully to distill profiles of rhythmic abilities, and ultimately shed light on individual variability and its relationship with both formal musical training and informal musical experiences.

    Additional information

    supplementary materials
  • Danziger, E. (1995). Intransitive predicate form class survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 46-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004298.

    Abstract

    Different linguistic structures allow us to highlight distinct aspects of a situation. The aim of this survey is to investigate similarities and differences in the expression of situations or events as “stative” (maintaining a state), “inchoative” (adopting a state) and “agentive” (causing something to be in a state). The questionnaire focuses on the encoding of stative, inchoative and agentive possibilities for the translation equivalents of a set of English verbs.
  • Danziger, E. (1995). Posture verb survey. In D. Wilkins (Ed.), Extensions of space and beyond: manual for field elicitation for the 1995 field season (pp. 33-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3004235.

    Abstract

    Expressions of human activities and states are a rich area for cross-linguistic comparison. Some languages of the world treat human posture verbs (e.g., sit, lie, kneel) as a special class of predicates, with distinct formal properties. This survey examines lexical, semantic and grammatical patterns for posture verbs, with special reference to contrasts between “stative” (maintaining a posture), “inchoative” (adopting a posture), and “agentive” (causing something to adopt a posture) constructions. The enquiry is thematically linked to the more general questionnaire 'Intransitive Predicate Form Class Survey'.
  • Davidson, D. J., Hanulikova, A., & Indefrey, P. (2012). Electrophysiological correlates of morphosyntactic integration in German phrasal context. Language and Cognitive Processes, 27, 288-311. doi:10.1080/01690965.2011.616448.

    Abstract

    The morphosyntactic paradigm of an inflected word can influence isolated word recognition, but its role in multiple-word phrasal integration is less clear. We examined the electrophysiological response to adjectives in short German prepositional phrases to evaluate whether strong and weak forms of the adjective show a differential response, and whether paradigm variables are related to this response. Twenty native German speakers classified serially presented phrases as grammatically correct or not while the electroencephalogram (EEG) was recorded. A functional mixed effects model of the response to grammatically correct trials revealed a differential response to strong and weak forms of the adjectives. This response difference depended on whether the preceding preposition imposed accusative or dative case. The lexically conditioned information content of the adjectives modulated a later interval of the response. The results indicate that grammatical context modulates the response to morphosyntactic information content, and lend support to the role of paradigm structure in integrative phrasal processing.
  • Dediu, D., & Levinson, S. C. (2012). Abstract profiles of structural stability point to universal tendencies, family-specific factors, and ancient connections between languages. PLoS One, 7(9), e45198. doi:10.1371/journal.pone.0045198.

    Abstract

    Language is the best example of a cultural evolutionary system, able to retain a phylogenetic signal over many thousands of years. The temporal stability (conservatism) of basic vocabulary is relatively well understood, but the stability of the structural properties of language (phonology, morphology, syntax) is still unclear. Here we report an extensive Bayesian phylogenetic investigation of the structural stability of numerous features across many language families and we introduce a novel method for analyzing the relationships between the “stability profiles” of language families. We found that there is a strong universal component across language families, suggesting the existence of universal linguistic, cognitive and genetic constraints. Against this background, however, each language family has a distinct stability profile, and these profiles cluster by geographic area and likely deep genealogical relationships. These stability profiles reveal, for example, the ancient historical relationships between the Siberian and American language families, presumed to be separated by at least 12,000 years. Thus, such higher-level properties of language seen as an evolutionary system might allow the investigation of ancient connections between languages and shed light on the peopling of the world.

    Additional information

    journal.pone.0045198.s001.pdf
  • Dediu, D., & Dingemanse, M. (2012). More than accent: Linguistic and cultural cues in the emergence of tag-based cooperation [Commentary]. Current Anthropology, 53, 606-607. doi:10.1086/667654.

    Abstract

    Commentary on Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.
  • Dediu, D. (2010). Linguistic and genetic diversity - how and why are they related? In M. Brüne, F. Salter, & W. McGrew (Eds.), Building bridges between anthropology, medicine and human ethology: Tributes to Wulf Schiefenhövel (pp. 169-178). Bochum: Europäischer Universitätsverlag.

    Abstract

    There are some 6000 languages spoken today, classified in approximately 90 linguistic families and many isolates, and also differing across structural, typological dimensions. Genetically, the human species is remarkably homogeneous, with the existing genetic diversity mostly explained by intra-population differences between individuals, but the remaining inter-population differences have a non-trivial structure. Population splits and contacts influence both languages and genes, in principle allowing them to evolve in parallel ways. The farming/language co-dispersal hypothesis is a well-known such theory, whereby farmers spreading agriculture from its places of origin also spread their genes and languages. A different type of relationship was recently proposed, involving a genetic bias which influences the structural properties of language as it is transmitted across generations. Such a bias was proposed to explain the correlations between the distribution of tone languages and two brain development-related human genes and, if confirmed by experimental studies, it could represent a new factor explaining the distribution of diversity. The present chapter overviews these related topics in the hope that a truly interdisciplinary approach could allow a better understanding of our complex (recent as well as evolutionary) history.
  • Deegan, B., Sturt, B., Ryder, D., Butcher, M., Brumby, S., Long, G., Badngarri, N., Lannigan, J., Blythe, J., & Wightman, G. (2010). Jaru animals and plants: Aboriginal flora and fauna knowledge from the south-east Kimberley and western Top End, north Australia. Halls Creek: Kimberley Language Resource Centre; Palmerston: Department of Natural Resources, Environment, the Arts and Sport.
  • Defina, R., & Majid, A. (2012). Conceptual event units of putting and taking in two unrelated languages. In N. Miyake, D. Peebles, & R. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 1470-1475). Austin, TX: Cognitive Science Society.

    Abstract

    People automatically chunk ongoing dynamic events into discrete units. This paper investigates whether linguistic structure is a factor in this process. We test the claim that describing an event with a serial verb construction will influence a speaker’s conceptual event structure. The grammar of Avatime (a Kwa language spoken in Ghana) requires its speakers to describe some, but not all, placement events using a serial verb construction which also encodes the preceding taking event. We tested Avatime and English speakers’ recognition memory for putting and taking events. Avatime speakers were more likely to falsely recognize putting and taking events from episodes associated with take-put serial verb constructions than from episodes associated with other constructions. English speakers showed no difference in false recognitions between episode types. This demonstrates that memory for episodes is related to the type of language used; and, moreover, across languages different conceptual representations are formed for the same physical episode, paralleling habitual linguistic practices.
  • Defina, R. (2010). Aspect and modality in Avatime. Master Thesis, Leiden University.
  • Demir, Ö. E., So, W.-C., Ozyurek, A., & Goldin-Meadow, S. (2012). Turkish- and English-speaking children display sensitivity to perceptual context in referring expressions they produce in speech and gesture. Language and Cognitive Processes, 27, 844-867. doi:10.1080/01690965.2011.589273.

    Abstract

    Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children's sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.
  • DePape, A., Chen, A., Hall, G., & Trainor, L. (2012). Use of prosody and information structure in high functioning adults with Autism in relation to language ability. Frontiers in Psychology, 3, 72. doi:10.3389/fpsyg.2012.00072.

    Abstract

    Abnormal prosody is a striking feature of the speech of those with Autism Spectrum Disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without Autism Spectrum Disorder and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language function. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
  • Diaz, B., Hintz, F., Kiebel, S. J., & von Kriegstein, K. (2012). Dysfunction of the auditory thalamus in developmental dyslexia. Proceedings of the National Academy of Sciences of the United States of America, 109(34), 13841-13846. doi:10.1073/pnas.1119828109.

    Abstract

    Developmental dyslexia, a severe and persistent reading and spelling impairment, is characterized by difficulties in processing speech sounds (i.e., phonemes). Here, we test the hypothesis that these phonological difficulties are associated with a dysfunction of the auditory sensory thalamus, the medial geniculate body (MGB). By using functional MRI, we found that, in dyslexic adults, the MGB responded abnormally when the task required attending to phonemes compared with other speech features. No other structure in the auditory pathway showed distinct functional neural patterns between the two tasks for dyslexic and control participants. Furthermore, MGB activity correlated with dyslexia diagnostic scores, indicating that the task modulation of the MGB is critical for performance in dyslexics. These results suggest that deficits in dyslexia are associated with a failure of the neural mechanism that dynamically tunes MGB according to predictions from cortical areas to optimize speech processing. This view on task-related MGB dysfunction in dyslexics has the potential to reconcile influential theories of dyslexia within a predictive coding framework of brain function.

  • Díaz, B., Mitterer, H., Broersma, M., & Sebastián-Gallés, N. (2012). Individual differences in late bilinguals' L2 phonological processes: From acoustic-phonetic analysis to lexical access. Learning and Individual Differences, 22, 680-689. doi:10.1016/j.lindif.2012.05.005.

    Abstract

    The extent to which the phonetic system of a second language is mastered varies across individuals. The present study evaluates the pattern of individual differences in late bilinguals across different phonological processes. Fifty-five late Dutch-English bilinguals were tested on their ability to perceive a difficult L2 speech contrast (the English /æ/-/ε/ contrast) in three different tasks: A categorization task, a word identification task and a lexical decision task. As a group, L2 listeners were less accurate than native listeners. However, at the individual level, almost half of the L2 listeners scored within the native range in the categorization task whereas a small percentage scored within the native range in the identification and lexical decision tasks. These results show that L2 listeners' performance crucially depends on the nature of the task, with higher L2 listener accuracy on an acoustic-phonetic analysis task than on tasks involving lexical processes. These findings parallel previous results for early bilinguals, where the pattern of performance was consistent with the processing hierarchy proposed by different models of speech perception. The results indicate that the analysis of patterns of non-native performance can provide important insights concerning the architecture of the speech perception system and the issue of language learnability.
  • Dietrich, R., Klein, W., & Noyau, C. (1995). The acquisition of temporality in a second language. Amsterdam: Benjamins.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dijkstra, T., & Kempen, G. (1984). Taal in uitvoering: Inleiding tot de psycholinguistiek. Groningen: Wolters-Noordhoff.
  • Dikshit, A. P., Das, D., Samal, R. R., Parashar, K., Mishra, C., & Parashar, S. (2024). Optimization of (Ba1-xCax)(Ti0.9Sn0.1)O3 ceramics in X-band using Machine Learning. Journal of Alloys and Compounds, 982: 173797. doi:10.1016/j.jallcom.2024.173797.

    Abstract

    Developing efficient electromagnetic interference shielding materials has become significantly important in present times. This paper reports a series of (Ba1-xCax)(Ti0.9Sn0.1)O3 (BCTS) (x = 0, 0.01, 0.05, and 0.1) ceramics synthesized by the conventional method which were studied for electromagnetic interference shielding (EMI) applications in X-band (8-12.4 GHz). EMI shielding properties and all S parameters (S11 & S12) of BCTS ceramic pellets were measured in the frequency range (8-12.4 GHz) using a Vector Network Analyser (VNA). The BCTS ceramic pellets for x = 0.05 showed maximum total effective shielding of 46 dB indicating good shielding behaviour for high-frequency applications. However, the development of lead-free ceramics with different concentrations usually requires iterative experiments resulting in longer development cycles and higher costs. To address this, we used a machine learning (ML) strategy to predict the EMI shielding for different concentrations and experimentally verify the concentration predicted to give the best EMI shielding. The ML model predicted BCTS ceramics with concentration (x = 0.06, 0.07, 0.08, and 0.09) to have higher shielding values. On experimental verification, a shielding value of 58 dB was obtained for x = 0.08, which was significantly higher than what was obtained experimentally before applying the ML approach. Our results show the potential of using ML in accelerating the process of optimal material development, reducing the need for repeated experimental measures significantly.
  • Dimitrova, D. V., Stowe, L. A., Redeker, G., & Hoeks, J. C. J. (2012). Less is not more: Neural responses to missing and superfluous accents in context. Journal of Cognitive Neuroscience, 24, 2400-2418. doi:10.1162/jocn_a_00302.

    Abstract

    Prosody, particularly accent, aids comprehension by drawing attention to important elements such as the information that answers a question. A study using ERP registration investigated how the brain deals with the interpretation of prosodic prominence. Sentences were embedded in short dialogues and contained accented elements that were congruous or incongruous with respect to a preceding question. In contrast to previous studies, no explicit prosodic judgment task was added. Robust effects of accentuation were evident in the form of an “accent positivity” (200–500 msec) for accented elements irrespective of their congruity. Our results show that incongruously accented elements, that is, superfluous accents, activate a specific set of neural systems that is inactive in case of incongruously unaccented elements, that is, missing accents. Superfluous accents triggered an early positivity around 100 msec poststimulus, followed by a right-lateralized negative effect (N400). This response suggests that redundant information is identified immediately and leads to the activation of a neural system that is associated with semantic processing (N400). No such effects were found when contextually expected accents were missing. In a later time window, both missing and superfluous accents triggered a late positivity on midline electrodes, presumably related to making sense of both kinds of mismatching stimuli. These results challenge previous findings of greater processing for missing accents and suggest that the natural processing of prosody involves a set of distinct, temporally organized neural systems.
  • Dimitrova, D. V. (2012). Neural correlates of prosody and information structure. PhD Thesis, Rijksuniversiteit Groningen.

    Abstract

    The present dissertation investigates what neurocognitive processes are activated in the brain when listeners comprehend spoken language and in particular the melody and rhythm of speech, also referred to as prosody. The findings of several electrophysiological studies show that prosody influences the early and late stages of spoken language processing. When words are accented, listeners consider them important, and the brain responds to accentuation already 200 milliseconds after stimulus onset. The processing of prosodic prominence occurs whether or not a context is present and whether or not accent is congruent with context, although the responses to accentuation may be modified by either of these factors and by the focus particle only. Listeners are sensitive not only to the presence of prosodic prominence but also to the type of accents speakers use: corrective prosody activates additional interpretation mechanisms related to the construction of corrective meaning. The parallel between accents across clauses impacts the disambiguation of sentences with verb ellipsis. By interpreting prosodically parallel elements as syntactically parallel, listeners arrive at less preferred interpretations of conjoined clauses. The research identifies early correlates of incongruous prosody in strongly predictive contexts as well as late integration processes for prosody comprehension, which are related to the processing of structural complexity in isolated and ambiguous sentences. The dissertation provides evidence that the brain is sensitive to differences in prosody even in the absence of prosodic judgment. However, by changing the task, one modulates the neural mechanisms of prosody processing.
  • Dimroth, C., Andorno, C., Benazzo, S., & Verhagen, J. (2010). Given claims about new topics: How Romance and Germanic speakers link changed and maintained information in narrative discourse. Journal of Pragmatics, 42(12), 3328-3344. doi:10.1016/j.pragma.2010.05.009.

    Abstract

    This paper deals with the anaphoric linking of information units in spoken discourse in French, Italian, Dutch and German. We distinguish the information units ‘time’, ‘entity’, and ‘predicate’ and specifically investigate how speakers mark the information structure of their utterances and enhance discourse cohesion in contexts where the predicate contains given information but there is a change in one or more of the other information units. Germanic languages differ from Romance languages in the availability of a set of assertion-related particles (e.g. doch/toch, wel; roughly meaning ‘indeed’) and the option of highlighting the assertion component of a finite verb independently of its lexical content (verum focus). Based on elicited production data from 20 native speakers per language, we show that speakers of Dutch and German relate utterances to one another by focussing on this assertion component, and propose an analysis of the additive scope particles ook/auch (also) along similar lines. Speakers of Romance languages tend to highlight change or maintenance in the other information units. Such differences in the repertoire have consequences for the selection of units that are used for anaphoric linking. We conclude that there is a Germanic and a Romance way of signalling the information flow and enhancing discourse cohesion.
