Publications

  • Cholin, J. (2004). Syllables in speech production: Effects of syllable preparation and syllable frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60589.

    Abstract

    The fluent production of speech is a very complex human skill. It requires the coordination of several articulatory subsystems. The instructions that lead articulatory movements to execution are the result of the interplay of speech production levels that operate above the articulatory network. During the process of word-form encoding, the groundwork is prepared for the articulatory programs, which then serve the articulators as basic units. This thesis investigated whether or not syllables form the basis for the articulatory programs and in particular whether or not these syllable programs are stored separately from the store of the lexical word-forms. It is assumed that syllable units are stored in a so-called 'mental syllabary'. The main goal of this thesis was to find evidence that the syllable plays a functionally important role in speech production and that syllables are stored units. In a variant of the implicit priming paradigm, it was investigated whether information about the syllabic structure of a target word facilitates the preparation (advance planning) of a to-be-produced utterance. These experiments yielded evidence for the functionally important role of syllables in speech production. In a subsequent series of experiments, it was demonstrated that the production of syllables is sensitive to frequency. Syllable frequency effects provide strong evidence for the notion of a mental syllabary because only stored units are likely to exhibit frequency effects. In a final study, effects of syllable preparation and syllable frequency were investigated in a combined study to disentangle the two effects. The results of this last experiment converged with those reported for the other experiments and added further support to the claim that syllables play a core functional role in speech production and are stored in a mental syllabary.

    Additional information

    full text via Radboud Repository
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Claassen, S., D'Antoni, J., & Senft, G. (2010). Some Trobriand Islands string figures. Bulletin of the International String Figure Association, 17, 72-128.

    Abstract

    The construction and execution of fourteen string figures from the Trobriand Islands are given, along with accompanying chants (in the original and in translation) and comparative notes. The figures were made during a 1984 string figure performance by two ladies in the village of Tauwema, on the island of Kaile’una. The performance was filmed by a team of German researchers. One of the figures appears not to have been recorded before, and the construction method of another figure was hitherto unknown. Some of the other figures have their own peculiarities.
  • Clahsen, H., Eisenbeiss, S., Hadler, M., & Sonnenstuhl, I. (2001). The Mental Representation of Inflected Words: An Experimental Study of Adjectives and Verbs in German. Language, 77(3), 510-534. doi:10.1353/lan.2001.0140.

    Abstract

    We investigate how morphological relationships between inflected word forms are represented in the mental lexicon, focusing on paradigmatic relations between regularly inflected word forms and relationships between different stem forms of the same lexeme. We present results from a series of psycholinguistic experiments investigating German adjectives (which are inflected for case, number, and gender) and the so-called strong verbs of German, which have different stem forms when inflected for person, number, tense, or mood. Evidence from three lexical-decision experiments indicates that regular affixes are stripped off from their stems for processing purposes. It will be shown that this holds for both unmarked and marked stem forms. Another set of experiments revealed priming effects between different paradigmatically related affixes and between different stem forms of the same lexeme. We will show that associative models of inflection do not capture these findings, and we explain our results in terms of combinatorial models of inflection in which regular affixes are represented in inflectional paradigms and stem variants are represented in structured lexical entries. We will also argue that the morphosyntactic features of stems and affixes form abstract underspecified entries. The experimental results indicate that the human language processor makes use of these representations.

  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Coco, M. I., Araujo, S., & Petersson, K. M. (2017). Disentangling stimulus plausibility and contextual congruency: Electro-physiological evidence for differential cognitive dynamics. Neuropsychologia, 96, 150-163. doi:10.1016/j.neuropsychologia.2016.12.008.

    Abstract

    Expectancy mechanisms are routinely used by the cognitive system in stimulus processing and in anticipation of appropriate responses. Electrophysiology research has documented negative shifts of brain activity when expectancies are violated within a local stimulus context (e.g., reading an implausible word in a sentence) or more globally between consecutive stimuli (e.g., a narrative of images with an incongruent end). In this EEG study, we examine the interaction between expectancies operating at the level of stimulus plausibility and at the more global level of contextual congruency to provide evidence for, or against, a dissociation of the underlying processing mechanisms. We asked participants to verify the congruency of pairs of cross-modal stimuli (a sentence and a scene), which varied in plausibility. ANOVAs on ERP amplitudes in selected windows of interest show that congruency violation has longer-lasting (from 100 to 500 ms) and more widespread effects than plausibility violation (from 200 to 400 ms). We also observed critical interactions between these factors, whereby incongruent and implausible pairs elicited stronger negative shifts than their congruent counterparts, both early on (100–200 ms) and between 400–500 ms. Our results suggest that the integration mechanisms are sensitive to both global and local effects of expectancy in a modality-independent manner. Overall, we provide novel insights into the interdependence of expectancy during meaning integration of cross-modal stimuli in a verification task.
  • Cohen, E. (2010). An author meets her critics. Around "The mind possessed: The cognition of spirit possession in an Afro-Brazilian religious tradition" by Emma Cohen [Response to comments by Diana Espirito Santo, Arnaud Halloy, and Pierre Lienard]. Religion and Society: Advances in Research, 1(1), 164-176. doi:10.3167/arrs.2010.010112.
  • Cohen, E. (2010). Anthropology of knowledge. Journal of the Royal Anthropological Institute, 16(S1), S193-S202. doi:10.1111/j.1467-9655.2010.01617.x.

    Abstract

    Explanatory accounts of the emergence, spread, storage, persistence, and transformation of knowledge face numerous theoretical and methodological challenges. This paper argues that although anthropologists are uniquely positioned to address some of these challenges, joint engagement with relevant research in neighbouring disciplines holds considerable promise for advancement in the area. Researchers across the human and social sciences are increasingly recognizing the importance of conjointly operative and mutually contingent bodily, cognitive, neural, and social mechanisms informing the generation and communication of knowledge. Selected cognitive scientific work, in particular, is reviewed here and used to illustrate how anthropology may potentially richly contribute not only to descriptive and interpretive endeavours, but to the development and substantiation of explanatory accounts also.
  • Cohen, E. (2010). [Review of the book The accidental mind: How brain evolution has given us love, memory, dreams, and god, by David J. Linden]. Journal for the Study of Religion, Nature & Culture, 4(3), 235-238. doi:10.1558/jsrnc.v4i3.239.
  • Cohen, E., Ejsmond-Frey, R., Knight, N., & Dunbar, R. (2010). Rowers’ high: Behavioural synchrony is correlated with elevated pain thresholds. Biology Letters, 6, 106-108. doi:10.1098/rsbl.2009.0670.

    Abstract

    Physical exercise is known to stimulate the release of endorphins, creating a mild sense of euphoria that has rewarding properties. Using pain tolerance (a conventional non-invasive assay for endorphin release), we show that synchronized training in a college rowing crew creates a heightened endorphin surge compared with a similar training regime carried out alone. This heightened effect from synchronized activity may explain the sense of euphoria experienced during other social activities (such as laughter, music-making and dancing) that are involved in social bonding in humans and possibly other vertebrates.
  • Cohen, E. (2010). Where humans and spirits meet: The politics of rituals and identified spirits in Zanzibar by Kjersti Larsen [Book review]. American Ethnologist, 37, 386-387. doi:10.1111/j.1548-1425.2010.01262_6.x.
  • Collins, J. (2017). Real and spurious correlations involving tonal languages. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistic systems (pp. 129-139). Berlin: Language Science Press.
  • Cooke, M., García Lecumberri, M. L., Scharenborg, O., & Van Dommelen, W. A. (2010). Language-independent processing in speech perception: Identification of English intervocalic consonants by speakers of eight European languages. Speech Communication, 52, 954-967. doi:10.1016/j.specom.2010.04.004.

    Abstract

    Processing speech in a non-native language requires listeners to cope with influences from their first language and to overcome the effects of limited exposure and experience. These factors may be particularly important when listening in adverse conditions. However, native listeners also suffer in noise, and the intelligibility of speech in noise clearly depends on factors which are independent of a listener’s first language. The current study explored the issue of language-independence by comparing the responses of eight listener groups differing in native language when confronted with the task of identifying English intervocalic consonants in three masker backgrounds, viz. stationary speech-shaped noise, temporally-modulated speech-shaped noise and competing English speech. The study analysed the effects of (i) noise type, (ii) speaker, (iii) vowel context, (iv) consonant, (v) phonetic feature classes, (vi) stress position, (vii) gender and (viii) stimulus onset relative to noise onset. A significant degree of similarity in the response to many of these factors was evident across all eight language groups, suggesting that acoustic and auditory considerations play a large role in determining intelligibility. Language-specific influences were observed in the rankings of individual consonants and in the masking effect of competing speech relative to speech-modulated noise.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjijn Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Cortázar-Chinarro, M., Lattenkamp, E. Z., Meyer-Lucht, Y., Luquet, E., Laurila, A., & Höglund, J. (2017). Drift, selection, or migration? Processes affecting genetic differentiation and variation along a latitudinal gradient in an amphibian. BMC Evolutionary Biology, 17: 189. doi:10.1186/s12862-017-1022-z.

    Abstract

    Past events like fluctuations in population size and post-glacial colonization processes may influence the relative importance of genetic drift, migration and selection when determining the present day patterns of genetic variation. We disentangle how drift, selection and migration shape neutral and adaptive genetic variation in 12 moor frog populations along a 1700 km latitudinal gradient. We studied genetic differentiation and variation at a MHC exon II locus and a set of 18 microsatellites.
    Results

    Using outlier analyses, we identified the MHC II exon 2 (corresponding to the β-2 domain) locus and one microsatellite locus (RCO8640) to be subject to diversifying selection, while five microsatellite loci showed signals of stabilizing selection among populations. STRUCTURE and DAPC analyses on the neutral microsatellites assigned populations to a northern and a southern cluster, reflecting two different post-glacial colonization routes found in previous studies. Genetic variation overall was lower in the northern cluster. The signature of selection on MHC exon II was weaker in the northern cluster, possibly as a consequence of smaller and more fragmented populations.
    Conclusion

    Our results show that historical demographic processes combined with selection and drift have led to a complex pattern of differentiation along the gradient where some loci are more divergent among populations than predicted from drift expectations due to diversifying selection, while other loci are more uniform among populations due to stabilizing selection. Importantly, both overall and MHC genetic variation are lower at northern latitudes. Due to lower evolutionary potential, the low genetic variation in northern populations may increase the risk of extinction when confronted with emerging pathogens and climate change.
  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Cristia, A., Seidl, A., & Onishi, K. H. (2010). Indices acoustiques de phonémicité et d'allophonie dans la parole adressée aux enfants. Actes des XXVIIIèmes Journées d’Étude sur la Parole (JEP), 28, 277-280.
  • Cristia, A. (2010). Phonetic enhancement of sibilants in infant-directed speech. The Journal of the Acoustical Society of America, 128, 424-434. doi:10.1121/1.3436529.

    Abstract

    The hypothesis that vocalic categories are enhanced in infant-directed speech (IDS) has received a great deal of attention and support. In contrast, work focusing on the acoustic implementation of consonantal categories has been scarce, and positive, negative, and null results have been reported. However, interpreting this mixed evidence is complicated by the facts that the definition of phonetic enhancement varies across articles, that small and heterogeneous groups have been studied across experiments, and further that the categories chosen are likely affected by other characteristics of IDS. Here, an analysis of the English sibilants /s/ and /ʃ/ in a large corpus of caregivers’ speech to another adult and to their infant suggests that consonantal categories are indeed enhanced, even after controlling for typical IDS prosodic characteristics.
  • Cronin, K. A., Schroeder, K. K. E., & Snowdon, C. T. (2010). Prosocial behaviour emerges independent of reciprocity in cottontop tamarins. Proceedings of the Royal Society of London Series B-Biological Sciences, 277, 3845-3851. doi:10.1098/rspb.2010.0879.

    Abstract

    The cooperative breeding hypothesis posits that cooperatively breeding species are motivated to act prosocially, that is, to behave in ways that provide benefits to others, and that cooperative breeding has played a central role in the evolution of human prosociality. However, investigations of prosocial behaviour in cooperative breeders have produced varying results and the mechanisms contributing to this variation are unknown. We investigated whether reciprocity would facilitate prosocial behaviour among cottontop tamarins, a cooperatively breeding primate species likely to engage in reciprocal altruism, by comparing the number of food rewards transferred to partners who had either immediately previously provided or denied rewards to the subject. Subjects were also tested in a non-social control condition. Overall, results indicated that reciprocity increased food transfers. However, temporal analyses revealed that when the tamarins' behaviour was evaluated in relation to the non-social control, results were best explained by (i) an initial depression in the transfer of rewards to partners who recently denied rewards, and (ii) a prosocial effect that emerged late in sessions independent of reciprocity. These results support the cooperative breeding hypothesis, but suggest a minimal role for positive reciprocity, and emphasize the importance of investigating proximate temporal mechanisms underlying prosocial behaviour.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjijn Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICLSP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A. (1979). Beyond parsing and lexical look-up. In R. J. Wales, & E. C. T. Walker (Eds.), New approaches to language mechanisms: a collection of psycholinguistic studies (pp. 133-149). Amsterdam: North-Holland.
  • Cutler, A. (2010). Abstraction-based efficiency in the lexicon. Laboratory Phonology, 1(2), 301-318. doi:10.1515/LABPHON.2010.016.

    Abstract

    Listeners learn from their past experience of listening to spoken words, and use this learning to maximise the efficiency of future word recognition. This paper summarises evidence that the facilitatory effects of drawing on past experience are mediated by abstraction, enabling learning to be generalised across new words and new listening situations. Phoneme category retuning, which allows adaptation to speaker-specific articulatory characteristics, is generalised on the basis of relatively brief experience to words previously unheard from that speaker. Abstract knowledge of prosodic regularities is applied to recognition even of novel words for which these regularities were violated. Prosodic word-boundary regularities drive segmentation of speech into words independently of the membership of the lexical candidate set resulting from the segmentation operation. Each of these different cases illustrates how abstraction from past listening experience has contributed to the efficiency of lexical recognition.
  • Cutler, A. (1979). Contemporary reaction to Rudolf Meringer’s speech error research. Historiographia Linguistica, 6, 57-76.
  • Cutler, A. (2017). Converging evidence for abstract phonological knowledge in speech processing. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1447-1448). Austin, TX: Cognitive Science Society.

    Abstract

    The perceptual processing of speech is a constant interplay of multiple competing albeit convergent processes: acoustic input vs. higher-level representations, universal mechanisms vs. language-specific, veridical traces of speech experience vs. construction and activation of abstract representations. The present summary concerns the third of these issues. The ability to generalise across experience and to deal with resulting abstractions is the hallmark of human cognition, visible even in early infancy. In speech processing, abstract representations play a necessary role in both production and perception. New sorts of evidence are now informing our understanding of the breadth of this role.
  • Cutler, A. (2001). De baby in je hoofd: luisteren naar eigen en andermans taal [Speech at the Catholic University's 78th Dies Natalis]. Nijmegen, The Netherlands: Nijmegen University Press.
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. In Abstracts of Laboratory Phonology 12 (pp. 115-116).
  • Cutler, A. (2001). Entries on: Acquisition of language by non-human primates; bilingualism; compound (linguistic); development of language-specific phonology; gender (linguistic); grammar; infant speech perception; language; lexicon; morphology; motor theory of speech perception; perception of second languages; phoneme; phonological store; phonology; prosody; sign language; slips of the tongue; speech perception; speech production; stress (linguistic); syntax; word recognition; words. In P. Winn (Ed.), Dictionary of biological psychology. London: Routledge.
  • Ip, M. H. K., & Cutler, A. (2017). Intonation facilitates prediction of focus even in the presence of lexical tones. In Proceedings of Interspeech 2017 (pp. 1218-1222). doi:10.21437/Interspeech.2017-264.

    Abstract

    In English and Dutch, listeners entrain to prosodic contours to predict where focus will fall in an utterance. However, is this strategy universally available, even in languages with different phonological systems? In a phoneme detection experiment, we examined whether prosodic entrainment is also found in Mandarin Chinese, a tone language, where in principle the use of pitch for lexical identity may take precedence over the use of pitch cues to salience. Consistent with the results from Germanic languages, response times were facilitated when preceding intonation predicted accent on the target-bearing word. Acoustic analyses revealed greater F0 range in the preceding intonation of the predicted-accent sentences. These findings have implications for how universal and language-specific mechanisms interact in the processing of salience.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (a coup stick snot with standing). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., & Norris, D. (1979). Monitoring sentence comprehension. In W. E. Cooper, & E. C. T. Walker (Eds.), Sentence processing: Psycholinguistic studies presented to Merrill Garrett (pp. 113-134). Hillsdale: Erlbaum.
  • Cutler, A., Norris, D., & McQueen, J. M. (1996). Lexical access in continuous speech: Language-specific realisations of a universal model. In T. Otake, & A. Cutler (Eds.), Phonological structure and language processing: Cross-linguistic studies (pp. 227-242). Berlin: Mouton de Gruyter.
  • Cutler, A., Eisner, F., McQueen, J. M., & Norris, D. (2010). How abstract phonemic categories are necessary for coping with speaker-related variation. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory phonology 10 (pp. 91-111). Berlin: de Gruyter.
  • Cutler, A. (2001). Listening to a second language through the ears of a first. Interpreting, 5, 1-23.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A., Mitterer, H., Brouwer, S., & Tuinman, A. (2010). Phonological competition in casual speech. In Proceedings of DiSS-LPSS Joint Workshop 2010 (pp. 43-46).
  • Cutler, A., & Otake, T. (1996). Phonological structure and its role in language processing. In T. Otake, & A. Cutler (Eds.), Phonological structure and language processing: Cross-linguistic studies (pp. 1-12). Berlin: Mouton de Gruyter.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1996). Prosody and the word boundary problem. In J. L. Morgan, & K. Demuth (Eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition (pp. 87-99). Mahwah, NJ: Erlbaum.
  • Cutler, A. (1996). The comparative study of spoken-language processing. In H. T. Bunnell (Ed.), Proceedings of the Fourth International Conference on Spoken Language Processing: Vol. 1 (pp. 1). New York: Institute of Electrical and Electronics Engineers.

    Abstract

    Psycholinguists are saddled with a paradox. Their aim is to construct a model of human language processing, which will hold equally well for the processing of any language, but this aim cannot be achieved just by doing experiments in any language. They have to compare processing of many languages, and actively search for effects which are specific to a single language, even though a model which is itself specific to a single language is really the last thing they want.
  • Cutler, A., Van Ooijen, B., Norris, D., & Sanchez-Casas, R. (1996). Speeded detection of vowels: A cross-linguistic study. Perception and Psychophysics, 58, 807-822. Retrieved from http://www.psychonomic.org/search/view.cgi?id=430.

    Abstract

    In four experiments, listeners’ response times to detect vowel targets in spoken input were measured. The first three experiments were conducted in English. In two, one using real words and the other, nonwords, detection accuracy was low, targets in initial syllables were detected more slowly than targets in final syllables, and both response time and missed-response rate were inversely correlated with vowel duration. In a third experiment, the speech context for some subjects included all English vowels, while for others, only five relatively distinct vowels occurred. This manipulation had essentially no effect, and the same response pattern was again observed. A fourth experiment, conducted in Spanish, replicated the results in the first three experiments, except that miss rate was here unrelated to vowel duration. We propose that listeners’ responses to vowel targets in naturally spoken input are effectively cautious, reflecting realistic appreciation of vowel variability in natural context.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (2010). Strategic deployment of orthographic knowledge in phoneme detection. Language and Speech, 53(3), 307-320. doi:10.1177/0023830910371445.

    Abstract

    The phoneme detection task is widely used in spoken-word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realized. Listeners detected the target sounds [b, m, t, f, s, k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b, m, t], which have consistent word-initial spelling, than to the targets [f, s, k], which are inconsistently spelled, but only when spelling was rendered salient by the presence in the experiment of many irregularly spelled filler words. Within the inconsistent targets [f, s, k], there was no significant difference between responses to targets in words with more usual (foam, seed, cattle) versus less usual (phone, cede, kettle) spellings. Phoneme detection is thus not necessarily sensitive to orthographic effects; knowledge of spelling stored in the lexical representations of words does not automatically become available as word candidates are activated. However, salient orthographic manipulations in experimental input can induce such sensitivity. We attribute this to listeners' experience of the value of spelling in everyday situations that encourage phonemic decisions (such as learning new names).
  • Cutler, A., Cooke, M., & Lecumberri, M. L. G. (2010). Preface. Speech Communication, 52, 863. doi:10.1016/j.specom.2010.11.003.

    Abstract

    Adverse listening conditions always make the perception of speech harder, but their deleterious effect is far greater if the speech we are trying to understand is in a non-native language. An imperfect signal can be coped with by recourse to the extensive knowledge one has of a native language, and imperfect knowledge of a non-native language can still support useful communication when speech signals are high-quality. But the combination of imperfect signal and imperfect knowledge leads rapidly to communication breakdown. This phenomenon is undoubtedly well known to every reader of Speech Communication from personal experience. Many readers will also have a professional interest in explaining, or remedying, the problems it produces. The journal’s readership being a decidedly interdisciplinary one, this interest will involve quite varied scientific approaches, including (but not limited to) modelling the interaction of first and second language vocabularies and phonemic repertoires, developing targeted listening training for language learners, and redesigning the acoustics of classrooms and conference halls. In other words, the phenomenon that this special issue deals with is a well-known one, that raises important scientific and practical questions across a range of speech communication disciplines, and Speech Communication is arguably the ideal vehicle for presentation of such a breadth of approaches in a single volume. The call for papers for this issue elicited a large number of submissions from across the full range of the journal’s interdisciplinary scope, requiring the guest editors to apply very strict criteria to the final selection. 
    Perhaps unique in the history of treatments of this topic is the combination represented by the guest editors for this issue: a phonetician whose primary research interest is in second-language speech (MLGL), an engineer whose primary research field is the acoustics of masking in speech processing (MC), and a psychologist whose primary research topic is the recognition of spoken words (AC). In the opening article of the issue, these three authors together review the existing literature on listening to second-language speech under adverse conditions, bringing together these differing perspectives for the first time in a single contribution. The introductory review is followed by 13 new experimental reports of phonetic, acoustic and psychological studies of the topic. The guest editors thank Speech Communication editor Marc Swerts and the journal’s team at Elsevier, as well as all the reviewers who devoted time and expert efforts to perfecting the contributions to this issue.
  • Cutler, A., & Otake, T. (1996). The processing of word prosody in Japanese. In P. McCormack, & A. Russell (Eds.), Proceedings of the 6th Australian International Conference on Speech Science and Technology (pp. 599-604). Canberra: Australian Speech Science and Technology Association.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université de Aix-en-Provence.
  • Cutler, A., McQueen, J. M., Norris, D., & Somejuan, A. (2001). The roll of the silly ball. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honor of Jacques Mehler (pp. 181-194). Cambridge, MA: MIT Press.
  • Cutler, A., & Van Donselaar, W. (2001). Voornaam is not a homophone: Lexical prosody and lexical access in Dutch. Language and Speech, 44, 171-195. doi:10.1177/00238309010440020301.

    Abstract

    Four experiments examined Dutch listeners’ use of suprasegmental information in spoken-word recognition. Isolated syllables excised from minimal stress pairs such as VOORnaam/voorNAAM could be reliably assigned to their source words. In lexical decision, no priming was observed from one member of minimal stress pairs to the other, suggesting that the pairs’ segmental ambiguity was removed by suprasegmental information. Words embedded in nonsense strings were harder to detect if the nonsense string itself formed the beginning of a competing word, but a suprasegmental mismatch to the competing word significantly reduced this inhibition. The same nonsense strings facilitated recognition of the longer words of which they constituted the beginning, but again the facilitation was significantly reduced by suprasegmental mismatch. Together these results indicate that Dutch listeners effectively exploit suprasegmental cues in recognizing spoken words. Nonetheless, suprasegmental mismatch appears to be somewhat less effective in constraining activation than segmental mismatch.
  • Cutler, A., & Shanley, J. (2010). Validation of a training method for L2 continuous-speech segmentation. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1844-1847).

    Abstract

    Recognising continuous speech in a second language is often unexpectedly difficult, as the operation of segmenting speech is so attuned to native-language structure. We report the initial steps in development of a novel training method for second-language listening, focusing on speech segmentation and employing a task designed for studying this: word-spotting. Listeners detect real words in sequences consisting of a word plus a minimal context. The present validation study shows that learners from varying non-English backgrounds successfully perform a version of this task in English, and display appropriate sensitivity to structural factors that also affect segmentation by native English listeners.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

    Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
  • D'Alessandra, Y., Devanna, P., Limana, F., Straino, S., Di Carlo, A., Brambilla, P. G., Rubino, M., Carena, M. C., Spazzafumo, L., De Simone, M., Micheli, B., Biglioli, P., Achilli, F., Martelli, F., Maggiolini, S., Marenzi, G., Pompilio, G., & Capogrossi, M. C. (2010). Circulating microRNAs are new and sensitive biomarkers of myocardial infarction. European Heart Journal, 31(22), 2765-2773. doi:10.1093/eurheartj/ehq167.

    Abstract

    Aims Circulating microRNAs (miRNAs) may represent a novel class of biomarkers; therefore, we examined whether acute myocardial infarction (MI) modulates miRNA plasma levels in humans and mice. Methods and results Healthy donors (n = 17) and patients (n = 33) with acute ST-segment elevation MI (STEMI) were evaluated. In one cohort (n = 25), the first plasma sample was obtained 517 ± 309 min after the onset of MI symptoms and after coronary reperfusion with percutaneous coronary intervention (PCI); miR-1, -133a, -133b, and -499-5p were ∼15- to 140-fold control, whereas miR-122 and -375 were ∼87–90% lower than control; 5 days later, miR-1, -133a, -133b, -499-5p, and -375 were back to baseline, whereas miR-122 remained lower than control through Day 30. In additional patients (n = 8; four treated with thrombolysis and four with PCI), miRNAs and troponin I (TnI) were quantified simultaneously starting 156 ± 72 min after the onset of symptoms and at different times thereafter. Peak miR-1, -133a, and -133b expression and TnI level occurred at a similar time, whereas miR-499-5p exhibited a slower time course. In mice, miRNA plasma levels and TnI were measured 15 min after coronary ligation and at different times thereafter. The behaviour of miR-1, -133a, -133b, and -499-5p was similar to that in STEMI patients; further, reciprocal changes in the expression levels of these miRNAs were found in cardiac tissue 3–6 h after coronary ligation. In contrast, miR-122 and -375 exhibited minor changes and no significant modulation. In mice with acute hind-limb ischaemia, there was no increase in the plasma level of the above miRNAs. Conclusion Acute MI up-regulated miR-1, -133a, -133b, and -499-5p plasma levels, both in humans and mice, whereas miR-122 and -375 were lower than control only in STEMI patients. These miRNAs represent novel biomarkers of cardiac damage.
  • Dalla Bella, S., Farrugia, F., Benoit, C.-E., Begel, V., Verga, L., Harding, E., & Kotz, S. A. (2017). BAASTA: Battery for the Assessment of Auditory Sensorimotor and Timing Abilities. Behavior Research Methods, 49(3), 1128-1145. doi:10.3758/s13428-016-0773-6.

    Abstract

    The Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) is a new tool for the systematic assessment of perceptual and sensorimotor timing skills. It spans a broad range of timing skills aimed at differentiating individual timing profiles. BAASTA consists of sensitive time perception and production tasks. Perceptual tasks include duration discrimination, anisochrony detection (with tones and music), and a version of the Beat Alignment Task. Perceptual thresholds for duration discrimination and anisochrony detection are estimated with a maximum likelihood procedure (MLP) algorithm. Production tasks use finger tapping and include unpaced and paced tapping (with tones and music), synchronization-continuation, and adaptive tapping to a sequence with a tempo change. BAASTA was tested in a proof-of-concept study with 20 non-musicians (Experiment 1). To validate the results of the MLP procedure, less widespread than standard staircase methods, three perceptual tasks of the battery (duration discrimination, anisochrony detection with tones, and with music) were further tested in a second group of non-musicians using 2 down / 1 up and 3 down / 1 up staircase paradigms (n = 24) (Experiment 2). The results show that the timing profiles provided by BAASTA allow detection of cases of timing/rhythm disorders. In addition, perceptual thresholds yielded by the MLP algorithm, although generally comparable to the results provided by standard staircase methods, tend to be slightly lower. In sum, BAASTA provides a comprehensive battery to test perceptual and sensorimotor timing skills, and to detect timing/rhythm deficits.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
  • Damian, M. F., Vigliocco, G., & Levelt, W. J. M. (2001). Effects of semantic context in the naming of pictures and words. Cognition, 81, B77-B86. doi:10.1016/S0010-0277(01)00135-4.

    Abstract

    Two experiments investigated whether lexical retrieval for speaking can be characterized as a competitive process by assessing the effects of semantic context on picture and word naming in German. In Experiment 1 we demonstrated that pictures are named slower in the context of same-category items than in the context of items from various semantic categories, replicating findings by Kroll and Stewart (Journal of Memory and Language, 33 (1994) 149). In Experiment 2 we used words instead of pictures. Participants either named the words in the context of same- or different-category items, or produced the words together with their corresponding determiner. While in the former condition words were named faster in the context of same-category items than of different-category items, the opposite pattern was obtained for the latter condition. These findings confirm the claim that the interfering effect of semantic context reflects competition in the retrieval of lexical entries in speaking.
  • Dediu, D. (2017). From biology to language change and diversity. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistics systems (pp. 39-52). Berlin: Language Science Press.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2017). Language is not isolated from its wider environment: Vocal tract influences on the evolution of speech and language. Language and Communication, 54, 9-20. doi:10.1016/j.langcom.2016.10.002.

    Abstract

    Language is not a purely cultural phenomenon somehow isolated from its wider environment, and we may only understand its origins and evolution by seriously considering its embedding in this environment as well as its multimodal nature. By environment here we understand other aspects of culture (such as communication technology, attitudes towards language contact, etc.), of the physical environment (ultraviolet light incidence, air humidity, etc.), and of the biological infrastructure for language and speech. We are specifically concerned in this paper with the latter, in the form of the biases, constraints and affordances that the anatomy and physiology of the vocal tract create on speech and language. In a nutshell, our argument is that (a) there is an under-appreciated amount of inter-individual variation in vocal tract (VT) anatomy and physiology, (b) variation that is non-randomly distributed across populations, and that (c) results in systematic differences in phonetics and phonology between languages. Relevant differences in VT anatomy include the overall shape of the hard palate, the shape of the alveolar ridge, the relationship between the lower and upper jaw, to mention just a few, and our data offer a new way to systematically explore such differences and their potential impact on speech. These differences generate very small biases that nevertheless can be amplified by the repeated use and transmission of language, affecting language diachrony and resulting in cross-linguistic synchronic differences. Moreover, the same type of biases and processes might have played an essential role in the emergence and evolution of language, and might allow us a glimpse into the speech and language of extinct humans by, for example, reconstructing the anatomy of parts of their vocal tract from the fossil record and extrapolating the biases we find in present-day humans.
  • Dediu, D. (2010). Linguistic and genetic diversity - how and why are they related? In M. Brüne, F. Salter, & W. McGrew (Eds.), Building bridges between anthropology, medicine and human ethology: Tributes to Wulf Schiefenhövel (pp. 169-178). Bochum: Europäischer Universitätsverlag.

    Abstract

    There are some 6000 languages spoken today, classified in approximately 90 linguistic families and many isolates, and also differing along structural, typological dimensions. Genetically, the human species is remarkably homogeneous, with the extant genetic diversity mostly explained by intra-population differences between individuals, but the remaining inter-population differences have a non-trivial structure. Population splits and contacts influence both languages and genes, in principle allowing them to evolve in parallel ways. The farming/language co-dispersal hypothesis is a well-known such theory, whereby farmers spreading agriculture from its places of origin also spread their genes and languages. A different type of relationship was recently proposed, involving a genetic bias which influences the structural properties of language as it is transmitted across generations. Such a bias was proposed to explain the correlations between the distribution of tone languages and two brain development-related human genes and, if confirmed by experimental studies, it could represent a new factor explaining the distribution of diversity. The present chapter overviews these related topics in the hope that a truly interdisciplinary approach could allow a better understanding of our complex (recent as well as evolutionary) history.
  • Deegan, B., Sturt, B., Ryder, D., Butcher, M., Brumby, S., Long, G., Badngarri, N., Lannigan, J., Blythe, J., & Wightman, G. (2010). Jaru animals and plants: Aboriginal flora and fauna knowledge from the south-east Kimberley and western Top End, north Australia. Halls Creek: Kimberley Language Resource Centre; Palmerston: Department of Natural Resources, Environment, the Arts and Sport.
  • Defina, R. (2010). Aspect and modality in Avatime. Master Thesis, Leiden University.
  • Den Os, E., & Boves, L. (2004). Natural multimodal interaction for design applications. In P. Cunningham (Ed.), Adoption and the knowledge economy (pp. 1403-1410). Amsterdam: IOS Press.
  • Deriziotis, P., & Fisher, S. E. (2017). Speech and Language: Translating the Genome. Trends in Genetics, 33(9), 642-656. doi:10.1016/j.tig.2017.07.002.

    Abstract

    Investigation of the biological basis of human speech and language is being transformed by developments in molecular technologies, including high-throughput genotyping and next-generation sequencing of whole genomes. These advances are shedding new light on the genetic architecture underlying language-related disorders (speech apraxia, specific language impairment, developmental dyslexia) as well as that contributing to variation in relevant skills in the general population. We discuss how state-of-the-art methods are uncovering a range of genetic mechanisms, from rare mutations of large effect to common polymorphisms that increase risk in a subtle way, while converging on neurogenetic pathways that are shared between distinct disorders. We consider the future of the field, highlighting the unusual challenges and opportunities associated with studying genomics of language-related traits.
  • Devaraju, K., Miskinyte, G., Hansen, M. G., Monni, E., Tornero, D., Woods, N. B., Bengzon, J., Ahlenius, H., Lindvall, O., & Kokaia, Z. (2017). Direct conversion of human fibroblasts to functional excitatory cortical neurons integrating into human neural networks. Stem Cell Research & Therapy, 8: 207. doi:10.1186/s13287-017-0658-3.

    Abstract

    Background: Human fibroblasts can be directly converted to several subtypes of neurons, but cortical projection neurons have not been generated. Methods: Here we screened for transcription factor combinations that could potentially convert human fibroblasts to functional excitatory cortical neurons. The induced cortical (iCtx) cells were analyzed for cortical neuronal identity using immunocytochemistry, single-cell quantitative polymerase chain reaction (qPCR), electrophysiology, and their ability to integrate into human neural networks in vitro and ex vivo using electrophysiology and rabies virus tracing. Results: We show that a combination of three transcription factors, BRN2, MYT1L, and FEZF2, has the ability to directly convert human fibroblasts to functional excitatory cortical neurons. The conversion efficiency was increased to about 16% by treatment with small molecules and microRNAs. The iCtx cells exhibited electrophysiological properties of functional neurons, had pyramidal-like cell morphology, and expressed key cortical projection neuronal markers. Single-cell analysis of iCtx cells revealed a complex gene expression profile, a subpopulation of them displaying a molecular signature closely resembling that of human fetal primary cortical neurons. The iCtx cells received synaptic inputs from co-cultured human fetal primary cortical neurons, contained spines, and expressed the postsynaptic excitatory scaffold protein PSD95. When transplanted ex vivo to organotypic cultures of adult human cerebral cortex, the iCtx cells exhibited morphological and electrophysiological properties of mature neurons, integrated structurally into the cortical tissue, and received synaptic inputs from adult human neurons. Conclusions: Our findings indicate that functional excitatory cortical neurons, generated here for the first time by direct conversion of human somatic cells, have the capacity for synaptic integration into adult human cortex.
  • Dimroth, C. (2004). Fokuspartikeln und Informationsgliederung im Deutschen. Tübingen: Stauffenburg.
  • Dimroth, C., & Klein, W. (1996). Fokuspartikeln in Lernervarietäten: Ein Analyserahmen und einige Beispiele. Zeitschrift für Literaturwissenschaft und Linguistik, 104, 73-114.
  • Dimroth, C., Andorno, C., Benazzo, S., & Verhagen, J. (2010). Given claims about new topics: How Romance and Germanic speakers link changed and maintained information in narrative discourse. Journal of Pragmatics, 42(12), 3328-3344. doi:10.1016/j.pragma.2010.05.009.

    Abstract

    This paper deals with the anaphoric linking of information units in spoken discourse in French, Italian, Dutch and German. We distinguish the information units ‘time’, ‘entity’, and ‘predicate’ and specifically investigate how speakers mark the information structure of their utterances and enhance discourse cohesion in contexts where the predicate contains given information but there is a change in one or more of the other information units. Germanic languages differ from Romance languages in the availability of a set of assertion-related particles (e.g. doch/toch, wel; roughly meaning ‘indeed’) and the option of highlighting the assertion component of a finite verb independently of its lexical content (verum focus). Based on elicited production data from 20 native speakers per language, we show that speakers of Dutch and German relate utterances to one another by focussing on this assertion component, and propose an analysis of the additive scope particles ook/auch (also) along similar lines. Speakers of Romance languages tend to highlight change or maintenance in the other information units. Such differences in the repertoire have consequences for the selection of units that are used for anaphoric linking. We conclude that there is a Germanic and a Romance way of signalling the information flow and enhancing discourse cohesion.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C. (2010). The acquisition of negation. In L. R. Horn (Ed.), The expression of negation (pp. 39-73). Berlin/New York: Mouton de Gruyter.
  • Dingemanse, M. (2017). Brain-to-brain interfaces and the role of language in distributing agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 59-66). Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190457204.003.0007.

    Abstract

    Brain-to-brain interfaces, in which brains are physically connected without the intervention of language, promise new ways of collaboration and communication between humans. I examine the narrow view of language implicit in current conceptions of brain-to-brain interfaces and put forward a constructive alternative, stressing the role of language in organising joint agency. Two features of language stand out as crucial: its selectivity, which provides people with much-needed filters between public words and private worlds; and its negotiability, which provides people with systematic opportunities for calibrating understanding and expressing consent and dissent. Without these checks and balances, brain-to-brain interfaces run the risk of reducing people to the level of amoeba in a slime mold; with them, they may mature to become useful extensions of human agency.
  • Dingemanse, M., & Akita, K. (2017). An inverse relation between expressiveness and grammatical integration: on the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics, 53(3), 501-532. doi:10.1017/S002222671600030X.

    Abstract

    Words and phrases may differ in the extent to which they are susceptible to prosodic foregrounding and expressive morphology: their expressiveness. They may also differ in the degree to which they are integrated in the morphosyntactic structure of the utterance: their grammatical integration. We describe an inverse relation that holds across widely varied languages, such that more expressiveness goes together with less grammatical integration, and vice versa. We review typological evidence for this inverse relation in 10 languages, then quantify and explain it using Japanese corpus data. We do this by tracking ideophones —vivid sensory words also known as mimetics or expressives— across different morphosyntactic contexts and measuring their expressiveness in terms of intonation, phonation and expressive morphology. We find that as expressiveness increases, grammatical integration decreases. Using gesture as a measure independent of the speech signal, we find that the most expressive ideophones are most likely to come together with iconic gestures. We argue that the ultimate cause is the encounter of two distinct and partly incommensurable modes of representation: the gradient, iconic, depictive system represented by ideophones and iconic gestures and the discrete, arbitrary, descriptive system represented by ordinary words. The study shows how people combine modes of representation in speech and demonstrates the value of integrating description and depiction into the scientific vision of language.

  • Dingemanse, M. (2010). [Review of Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition. By Deborah Tannen]. Language in Society, 39(1), 139-140. doi:10.1017/S0047404509990765.

    Abstract

    Reviews the book, Talking voices: Repetition, dialogue, and imagery in conversational discourse. 2nd edition by Deborah Tannen. This book is the same as the 1989 original except for an added introduction. This introduction situates TV in the context of intertextuality and gives a survey of relevant research since the book first appeared. The strength of the book lies in its insightful analysis of the auditory side of conversation. Yet talking voices have always been embedded in richly contextualized multimodal speech events. As spontaneous and pervasive involvement strategies, both iconic gestures and ideophones should be of central importance to the analysis of conversational discourse. Unfortunately, someone who picks up this book is pretty much left in the dark about the prevalence of these phenomena in everyday face-to-face interaction all over the world.
  • Dingemanse, M. (2010). Folk definitions of ideophones. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 24-29). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529151.

    Abstract

    Ideophones are marked words that depict sensory events, for example English hippety-hoppety ‘in a limping and hobbling manner’ or Siwu mukumuku ‘mouth movements of a toothless person eating’. They typically have special sound patterns and distinct grammatical properties. Ideophones are found in many languages of the world, suggesting a common fascination with detailed sensory depiction, but reliable data on their meaning and use is still very scarce. This task involves video-recording spontaneous, informal explanations (“folk definitions”) of individual ideophones by native speakers, in their own language. The approach facilitates collection of rich primary data in a planned context while ensuring a large amount of spontaneity and freedom.
  • Dingemanse, M. (2017). Expressiveness and system integration: On the typology of ideophones, with special reference to Siwu. STUF - Language Typology and Universals, 70(2), 363-384. doi:10.1515/stuf-2017-0018.

    Abstract

    Ideophones are often described as words that are highly expressive and morphosyntactically marginal. A study of ideophones in everyday conversations in Siwu (Kwa, eastern Ghana) reveals a landscape of variation and change that sheds light on some larger questions in the morphosyntactic typology of ideophones. The article documents a trade-off between expressiveness and morphosyntactic integration, with high expressiveness linked to low integration and vice versa. It also describes a pathway for deideophonisation and finds that frequency of use is a factor that influences the degree to which ideophones can come to be more like ordinary words. The findings have implications for processes of (de)ideophonisation, ideophone borrowing, and ideophone typology. A key point is that the internal diversity we find in naturally occurring data, far from being mere noise, is patterned variation that can help us to get a handle on the factors shaping ideophone systems within and across languages.
  • Dingemanse, M. (2017). On the margins of language: Ideophones, interjections and dependencies in linguistic theory. In N. J. Enfield (Ed.), Dependencies in language (pp. 195-202). Berlin: Language Science Press. doi:10.5281/zenodo.573781.

    Abstract

    Linguistic discovery is viewpoint-dependent, just like our ideas about what is marginal and what is central in language. In this essay I consider two supposed marginalia —ideophones and interjections— which provide some useful pointers for widening our field of view. Ideophones challenge us to take a fresh look at language and consider how it is that our communication system combines multiple modes of representation. Interjections challenge us to extend linguistic inquiry beyond sentence level, and remind us that language is social-interactive at core. Marginalia, then, are not the obscure, exotic phenomena that can be safely ignored: they represent opportunities for innovation and invite us to keep pushing the edges of linguistic inquiry.
  • Dingemanse, M., Rossi, G., & Floyd, S. (2017). Place reference in story beginnings: a cross-linguistic study of narrative and interactional affordances. Language in Society, 46(2), 129-158. doi:10.1017/S0047404516001019.

    Abstract

    People often begin stories in conversation by referring to person, time, and place. We study story beginnings in three societies and find place reference is recurrently used to (i) set the stage, foreshadowing the type of story and the kind of response due, and to (ii) make the story cohere, anchoring elements of the developing story. Recipients orient to these interactional affordances of place reference by responding in ways that attend to the relevance of place for the story and by requesting clarification when references are incongruent or noticeably absent. The findings are based on 108 story beginnings in three unrelated languages: Cha’palaa, a Barbacoan language of Ecuador; Northern Italian, a Romance language of Italy; and Siwu, a Kwa language of Ghana. The commonalities suggest we have identified generic affordances of place reference, and that storytelling in conversation offers a robust sequential environment for systematic comparative research on conversational structures.
  • Dobel, C. E., Meyer, A. S., & Levelt, W. J. M. (2001). Registrierung von Augenbewegungen bei Studien zur Sprachproduktion. In A. Zimmer (Ed.), Experimentelle Psychologie. Proceedings of 43. Tagung experimentell arbeitender Psychologen (pp. 116-122). Lengerich, Germany: Pabst Science Publishers.
  • Dobel, C., Pulvermüller, F., Härle, M., Cohen, R., Köbbel, P., Schönle, P. W., & Rockstroh, B. (2001). Syntactic and semantic processing in the healthy and aphasic human brain. Experimental Brain Research, 140(1), 77-85. doi:10.1007/s002210100794.

    Abstract

    A syntactic and a semantic task were performed by German-speaking healthy subjects and aphasics with lesions in the dominant left hemisphere. In both tasks, pictures of objects were presented that had to be classified by pressing buttons. The classification was into grammatical gender in the syntactic task (masculine or feminine gender?) and into semantic category in the semantic task (man- or nature-made?). Behavioral data revealed a significant Group by Task interaction, with aphasics showing most pronounced problems with syntax. Brain event-related potentials 300–600 ms following picture onset showed different task-dependent laterality patterns in the two groups. In controls, the syntax task induced a left-lateralized negative ERP, whereas the semantic task produced more symmetric responses over the hemispheres. The opposite was the case in the patients, where, paradoxically, stronger laterality of physiological brain responses emerged in the semantic task than in the syntactic task. We interpret these data based on neuro-psycholinguistic models of word processing and current theories about the roles of the hemispheres in language recovery.
  • Dolscheid, S., Shayan, S., Ozturk, O., Majid, A., & Casasanto, D. (2010). Language shapes mental representations of musical pitch: Implications for metaphorical language processing [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 137). York: University of York.

    Abstract

    Speakers often use spatial metaphors to talk about musical pitch (e.g., a low note, a high soprano). Previous experiments suggest that English speakers also think about pitches as high or low in space, even when they're not using language or musical notation (Casasanto, 2010). Do metaphors in language merely reflect pre-existing associations between space and pitch, or might language also shape these non-linguistic metaphorical mappings? To investigate the role of language in pitch representation, we conducted a pair of non-linguistic space-pitch interference experiments in speakers of two languages that use different spatial metaphors. Dutch speakers usually describe pitches as 'high' (hoog) and 'low' (laag). Farsi speakers, however, often describe high-frequency pitches as 'thin' (naazok) and low-frequency pitches as 'thick' (koloft). Do Dutch and Farsi speakers mentally represent pitch differently? To find out, we asked participants to reproduce musical pitches that they heard in the presence of irrelevant spatial information (i.e., lines that varied either in height or in thickness). For the Height Interference experiment, horizontal lines bisected a vertical reference line at one of nine different locations. For the Thickness Interference experiment, a vertical line appeared in the middle of the screen in one of nine thicknesses. In each experiment, the nine different lines were crossed with nine different pitches ranging from C4 to G#4 in semitone increments, to produce 81 distinct trials. If Dutch and Farsi speakers mentally represent pitch the way they talk about it, using different kinds of spatial representations, they should show contrasting patterns of cross-dimensional interference: Dutch speakers' pitch estimates should be more strongly affected by irrelevant height information, and Farsi speakers' by irrelevant thickness information. As predicted, Dutch speakers' pitch estimates were significantly modulated by spatial height but not by thickness. Conversely, Farsi speakers' pitch estimates were modulated by spatial thickness but not by height (2x2 ANOVA on normalized slopes of the effect of space on pitch: F(1,71) = 17.15, p < .001). To determine whether language plays a causal role in shaping pitch representations, we conducted a training experiment. Native Dutch speakers learned to use Farsi-like metaphors, describing pitch relationships in terms of thickness (e.g., a cello sounds 'thicker' than a flute). After training, Dutch speakers showed a significant effect of Thickness interference in the non-linguistic pitch reproduction task, similar to native Farsi speakers: on average, pitches accompanied by thicker lines were reproduced as lower in pitch (effect of thickness on pitch: r = -.22, p = .002). By conducting psychophysical tasks, we tested the 'Whorfian' question without using words. Yet, results also inform theories of metaphorical language processing. According to psycholinguistic theories (e.g., Bowdle & Gentner, 2005), highly conventional metaphors are processed without any active mapping from the source to the target domain (e.g., from space to pitch). Our data, however, suggest that when people use verbal metaphors they activate a corresponding non-linguistic mapping from either height or thickness to pitch, strengthening this association at the expense of competing associations. As a result, people who use different metaphors in their native languages form correspondingly different representations of musical pitch. Casasanto, D. (2010). Space for Thinking. In Language, Cognition and Space: State of the art and new directions. V. Evans & P. Chilton (Eds.), 453-478, London: Equinox Publishing. Bowdle, B. & Gentner, D. (2005). The career of metaphor. Psychological Review, 112, 193-216.
  • Doumas, L. A. A., Hamer, A., Puebla, G., & Martin, A. E. (2017). A theory of the detection and learning of structured representations of similarity and relative magnitude. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 1955-1960). Austin, TX: Cognitive Science Society.

    Abstract

    Responding to similarity, difference, and relative magnitude (SDM) is ubiquitous in the animal kingdom. However, humans seem unique in the ability to represent relative magnitude (‘more’/‘less’) and similarity (‘same’/‘different’) as abstract relations that take arguments (e.g., greater-than (x,y)). While many models use structured relational representations of magnitude and similarity, little progress has been made on how these representations arise. Models that use these representations assume access to computations of similarity and magnitude a priori, either encoded as features or as output of evaluation operators. We detail a mechanism for producing invariant responses to “same”, “different”, “more”, and “less” which can be exploited to compute similarity and magnitude as an evaluation operator. Using DORA (Doumas, Hummel, & Sandhofer, 2008), these invariant responses can be used to learn structured relational representations of relative magnitude and similarity from pixel images of simple shapes.
  • Drexler, H., Verbunt, A., & Wittenburg, P. (1996). Max Planck Electronic Information Desk. In B. den Brinker, J. Beek, A. Hollander, & R. Nieuwboer (Eds.), Zesde workshop computers in de psychologie: Programma en uitgebreide samenvattingen (pp. 64-66). Amsterdam: Vrije Universiteit Amsterdam, IFKB.
  • Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

    Abstract

    Purpose This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

    Method Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

    Results Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2017). L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. The Journal of the Acoustical Society of America, 142(5), 3058-3068. doi:10.1121/1.5010169.

    Abstract

    Previous studies examined various factors influencing voice recognition and learning with mixed results. The present study investigates the separate and combined contribution of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voice of four native English speakers, speaking in English, during four-day training. Training was successful and listeners' accuracy was shown to be influenced by the acoustic characteristics of speakers and the sound composition of the words used in the training, but not by lexical frequency of the words, nor the lexical knowledge of the listeners or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
  • Drude, S. (2001). Entschlüsselung einer unbekannten Indianersprache: Ein Projekt zur Dokumentation der bedrohten brasilianischen Indianersprache Awetí. Fundiert: Das Wissenschaftsmagazin der Freien Universität Berlin, 2, 112-121. Retrieved from http://www.elfenbeinturm.net/archiv/2001/lust3.html.

    Abstract

    The Awetí are a small Indian tribe in central Brazil that has so far had little contact with white people. As part of a Volkswagen Foundation programme for the documentation of endangered languages, our author will visit the Awetí again and reports, as the "younger brother of the chief", on his efforts to record the language of the Awetí for future generations.
  • Drude, S. (2004). Wörterbuchinterpretation: Integrative Lexikographie am Beispiel des Guaraní. Tübingen: Niemeyer.

    Abstract

    This study provides an answer to the question of how dictionaries should be read. For this purpose, articles taken from an outline for a Guaraní-German dictionary geared to established lexicographic practice are provided with standardized interpretations. Each article is systematically assigned a formal sentence making its meaning explicit both for content words (including polysemes) and functional words or affixes. Integrative Linguistics proves its theoretical and practical value both for the description of Guaraní (indigenous Indian language spoken in Paraguay, Argentina and Brazil) and in metalexicographic terms.
  • Dugoujon, J.-M., Larrouy, G., Mazières, S., Brucato, N., Sevin, A., Cassar, O., & Gessain, A. (2010). Histoire et dynamique du peuplement humain en Amazonie: L’exemple de la Guyane. In A. Pavé, & G. Fornet (Eds.), Amazonie: Une aventure scientifique et humaine du CNRS (pp. 128-132). Paris: Galaade Éditions.
  • Dunn, M., & Terrill, A. (2004). Lexical comparison between Papuan languages: Inland bird and tree species. In A. Majid (Ed.), Field Manual Volume 9 (pp. 65-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492942.

    Abstract

    The Pioneers project seeks to uncover relationships between the Papuan languages of Island Melanesia. One basic way to uncover linguistic relationships, either contact or genetic, is through lexical comparison. We have seen very few shared words between our Papuan languages and any other languages, either Oceanic or Papuan, but most of the words which are shared are shared because they are commonly borrowed from Oceanic languages. This task is aimed at enabling fieldworkers to collect terms for inland bird and tree species. In the past it is has proved very difficult for non-experts to identify plant and bird species, so the task consists of a booklet of colour pictures of some of the more common species, with information on the range and habits of each species, as well as some information on their cultural uses, which should enable better identification. It is intended that fieldworkers will show this book to consultants and use it as an elicitation aid.
  • Edmiston, P., Perlman, M., & Lupyan, G. (2017). Creating words from iterated vocal imitation. In G. Gunzelman, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 331-336). Austin, TX: Cognitive Science Society.

    Abstract

    We report the results of a large-scale (N=1571) experiment to investigate whether spoken words can emerge from the process of repeated imitation. Participants played a version of the children’s game “Telephone”. The first generation was asked to imitate recognizable environmental sounds (e.g., glass breaking, water splashing); subsequent generations imitated the imitators for a total of 8 generations. We then examined whether the vocal imitations became more stable and word-like, retained a resemblance to the original sound, and became more suitable as learned category labels. The results showed (1) the imitations became progressively more word-like, (2) even after 8 generations, they could be matched above chance to the environmental sound that motivated them, and (3) imitations from later generations were more effective as learned category labels. These results show how repeated imitation can create progressively more word-like forms while retaining a semblance of iconicity.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eising, E., Shyti, R., 't Hoen, P. A. C., Vijfhuizen, L. S., Huisman, S. M. H., Broos, L. A. M., Mahfouz, A., Reinders, M. J. T., Ferrari, M. D., Tolner, E. A., De Vries, B., & Van den Maagdenberg, A. M. J. M. (2017). Cortical spreading depression causes unique dysregulation of inflammatory pathways in a transgenic mouse model of migraine. Molecular Neurobiology, 54(4), 2986-2996. doi:10.1007/s12035-015-9681-5.

    Abstract

    Familial hemiplegic migraine type 1 (FHM1) is a rare monogenic subtype of migraine with aura caused by mutations in CACNA1A that encodes the α1A subunit of voltage-gated CaV2.1 calcium channels. Transgenic knock-in mice that carry the human FHM1 R192Q missense mutation (‘FHM1 R192Q mice’) exhibit an increased susceptibility to cortical spreading depression (CSD), the mechanism underlying migraine aura. Here, we analysed gene expression profiles from isolated cortical tissue of FHM1 R192Q mice 24 h after experimentally induced CSD in order to identify molecular pathways affected by CSD. Gene expression profiles were generated using deep serial analysis of gene expression sequencing. Our data reveal a signature of inflammatory signalling upon CSD in the cortex of both mutant and wild-type mice. However, only in the brains of FHM1 R192Q mice specific genes are up-regulated in response to CSD that are implicated in interferon-related inflammatory signalling. Our findings show that CSD modulates inflammatory processes in both wild-type and mutant brains, but that an additional unique inflammatory signature becomes expressed after CSD in a relevant mouse model of migraine.
