Publications

  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Burchfield, A., & Antoniou, M. (2019). A criterial interlocutor tally for successful talker adaptation? In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1485-1489). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Part of the remarkable efficiency of listening is accommodation to unfamiliar talkers’ specific pronunciations by retuning of phonemic intercategory boundaries. Such retuning occurs in second (L2) as well as first language (L1); however, recent research with emigrés revealed successful adaptation in the environmental L2 but, unprecedentedly, not in L1 despite continuing L1 use. A possible explanation involving relative exposure to novel talkers is here tested in heritage language users with Mandarin as family L1 and English as environmental language. In English, exposure to an ambiguous sound in disambiguating word contexts prompted the expected adjustment of phonemic boundaries in subsequent categorisation. However, no adjustment occurred in Mandarin, again despite regular use. Participants reported highly asymmetric interlocutor counts in the two languages. We conclude that successful retuning ability requires regular exposure to novel talkers in the language in question, a criterion not met for the emigrés’ or for these heritage users’ L1.
  • Cutler, A. (1979). Contemporary reaction to Rudolf Meringer’s speech error research. Historiographia Linguistica, 6, 57-76.
  • Cutler, A., & Butterfield, S. (1990). Durational cues to word boundaries in clear speech. Speech Communication, 9, 485-495.

    Abstract

    One of a listener’s major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by producing deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., McQueen, J. M., & Robinson, K. (1990). Elizabeth and John: Sound patterns of men’s and women’s names. Journal of Linguistics, 26, 471-482. doi:10.1017/S0022226700014754.
  • Cutler, A., & Davis, C. (2012). An orthographic effect in phoneme processing, and its limitations. Frontiers in Psychology, 3, 18. doi:10.3389/fpsyg.2012.00018.

    Abstract

    To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1989). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1,2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A. (2012). Native listening: The flexibility dimension. Dutch Journal of Applied Linguistics, 1(2), 169-187.

    Abstract

    The way we listen to spoken language is tailored to the specific benefit of native-language speech input. Listening to speech in non-native languages can be significantly hindered by this native bias. Is it possible to determine the degree to which a listener is listening in a native-like manner? Promising indications of how this question may be tackled are provided by new research findings concerning the great flexibility that characterises listening to the L1, in online adjustment of phonetic category boundaries for adaptation across talkers, and in modulation of lexical dynamics for adjustment across listening conditions. This flexibility pays off in many dimensions, including listening in noise, adaptation across dialects, and identification of voices. These findings further illuminate the robustness and flexibility of native listening, and potentially point to ways in which we might begin to assess degrees of ‘native-likeness’ in this skill.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A., Otake, T., & Bruggeman, L. (2012). Phonologically determined asymmetries in vocabulary structure across languages. Journal of the Acoustical Society of America, 132(2), EL155-EL160. doi:10.1121/1.4737596.

    Abstract

    Studies of spoken-word recognition have revealed that competition from embedded words differs in strength as a function of where in the carrier word the embedded word is found and have further shown embedding patterns to be skewed such that embeddings in initial position in carriers outnumber embeddings in final position. Lexico-statistical analyses show that this skew is highly attenuated in Japanese, a noninflectional language. Comparison of the extent of the asymmetry in the three Germanic languages English, Dutch, and German allows the source to be traced to a combination of suffixal morphology and vowel reduction in unstressed syllables.
  • Cutler, A., & Swinney, D. A. (1987). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A. (2015). Representation of second language phonology. Applied Psycholinguistics, 36(1), 115-128. doi:10.1017/S0142716414000459.

    Abstract

    Orthographies encode phonological information only at the level of words (chiefly, the information encoded concerns phonetic segments; in some cases, tonal information or default stress may be encoded). Of primary interest to second language (L2) learners is whether orthography can assist in clarifying L2 phonological distinctions that are particularly difficult to perceive (e.g., where one native-language phonemic category captures two L2 categories). A review of spoken-word recognition evidence suggests that orthographic information can install knowledge of such a distinction in lexical representations but that this does not affect learners’ ability to perceive the phonemic distinction in speech. Words containing the difficult phonemes become even harder for L2 listeners to recognize, because perception maps less accurately to lexical content.
  • Cutler, A., & Scott, D. R. (1990). Speaker sex and perceived apportionment of talk. Applied Psycholinguistics, 11, 253-272. doi:10.1017/S0142716400008882.

    Abstract

    It is a widely held belief that women talk more than men; but experimental evidence has suggested that this belief is mistaken. The present study investigated whether listener bias contributes to this mistake. Dialogues were recorded in mixed-sex and single-sex versions, and male and female listeners judged the proportions of talk contributed to the dialogues by each participant. Female contributions to mixed-sex dialogues were rated as greater than male contributions by both male and female listeners. Female contributions were more likely to be overestimated when they were speaking a dialogue part perceived as probably female than when they were speaking a dialogue part perceived as probably male. It is suggested that the misestimates are due to a complex of factors that may involve both perceptual effects such as misjudgment of rates of speech and sociological effects such as attitudes to social roles and perception of power relations.
  • Cutler, A. (1990). Syllabic lengthening as a word boundary cue. In R. Seidl (Ed.), Proceedings of the 3rd Australian International Conference on Speech Science and Technology (pp. 324-328). Canberra: Australian Speech Science and Technology Association.

    Abstract

    Bisyllabic sequences which could be interpreted as one word or two were produced in sentence contexts by a trained speaker, and syllabic durations measured. Listeners judged whether the bisyllables, excised from context, were one word or two. The proportion of two-word choices correlated positively with measured duration, but only for bisyllables stressed on the second syllable. The results may suggest a limit for listener sensitivity to syllabic lengthening as a word boundary cue.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., Norris, D., & Van Ooijen, B. (1990). Vowels as phoneme detection targets. In Proceedings of the First International Conference on Spoken Language Processing (pp. 581-584).

    Abstract

    Phoneme detection is a psycholinguistic task in which listeners' response time to detect the presence of a pre-specified phoneme target is measured. Typically, detection tasks have used consonant targets. This paper reports two experiments in which subjects responded to vowels as phoneme detection targets. In the first experiment, targets occurred in real words, in the second in nonsense words. Response times were long by comparison with consonantal targets. Targets in initial syllables were responded to much more slowly than targets in second syllables. Strong vowels were responded to faster than reduced vowels in real words but not in nonwords. These results suggest that the process of phoneme detection produces different results for vowels and for consonants. We discuss possible explanations for this difference, in particular the possibility of language-specificity.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cysouw, M., Dediu, D., & Moran, S. (2012). Comment on “Phonemic Diversity Supports a Serial Founder Effect Model of Language Expansion from Africa”. Science, 335, 657-b. doi:10.1126/science.1208841.

    Abstract

    We show that Atkinson’s (Reports, 15 April 2011, p. 346) intriguing proposal—that global linguistic diversity supports a single language origin in Africa—is an artifact of using suboptimal data, biased methodology, and unjustified assumptions. We criticize his approach using more suitable data, and we additionally provide new results suggesting a more complex scenario for the emergence of global linguistic diversity.
  • Dagklis, A., Ponzoni, M., Govi, S., Cangi, M. G., Pasini, E., Charlotte, F., Vino, A., Doglioni, C., Davi, F., Lossos, I. S., Ntountas, I., Papadaki, T., Dolcetti, R., Ferreri, A. J. M., Stamatopoulos, K., & Ghia, P. (2012). Immunoglobulin gene repertoire in ocular adnexal lymphomas: hints on the nature of the antigenic stimulation. Leukemia, 26, 814-821. doi:10.1038/leu.2011.276.

    Abstract

    Evidence from certain geographical areas links lymphomas of the ocular adnexa marginal zone B-cell lymphomas (OAMZL) with Chlamydophila psittaci (Cp) infection, suggesting that lymphoma development is dependent upon chronic stimulation by persistent infections. Notwithstanding that, the actual immunopathogenetical mechanisms have not yet been elucidated. As in other B-cell lymphomas, insight into this issue, especially with regard to potential selecting ligands, could be provided by analysis of the immunoglobulin (IG) receptors of the malignant clones. To this end, we studied the molecular features of IGs in 44 patients with OAMZL (40% Cp-positive), identifying features suggestive of a pathogenic mechanism of autoreactivity. Herein, we show that lymphoma cells express a distinctive IG repertoire, with electropositive antigen (Ag)-binding sites, reminiscent of autoantibodies (auto-Abs) recognizing DNA. Additionally, five (11%) cases of OAMZL expressed IGs homologous with autoreactive Abs or IGs of patients with chronic lymphocytic leukemia, a disease known for the expression of autoreactive IGs by neoplastic cells. In contrast, no similarity with known anti-Chlamydophila Abs was found. Taken together, these results strongly indicate that OAMZL may originate from B cells selected for their capability to bind Ags and, in particular, auto-Ags. In OAMZL associated with Cp infection, the pathogen likely acts indirectly on the malignant B cells, promoting the development of an inflammatory milieu, where auto-Ags could be exposed and presented, driving proliferation and expansion of self-reactive B cells.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
  • Davidson, D. J., Hanulikova, A., & Indefrey, P. (2012). Electrophysiological correlates of morphosyntactic integration in German phrasal context. Language and Cognitive Processes, 27, 288-311. doi:10.1080/01690965.2011.616448.

    Abstract

    The morphosyntactic paradigm of an inflected word can influence isolated word recognition, but its role in multiple-word phrasal integration is less clear. We examined the electrophysiological response to adjectives in short German prepositional phrases to evaluate whether strong and weak forms of the adjective show a differential response, and whether paradigm variables are related to this response. Twenty native German speakers classified serially presented phrases as grammatically correct or not while the electroencephalogram (EEG) was recorded. A functional mixed effects model of the response to grammatically correct trials revealed a differential response to strong and weak forms of the adjectives. This response difference depended on whether the preceding preposition imposed accusative or dative case. The lexically conditioned information content of the adjectives modulated a later interval of the response. The results indicate that grammatical context modulates the response to morphosyntactic information content, and lend support to the role of paradigm structure in integrative phrasal processing.
  • Dediu, D., & Levinson, S. C. (2012). Abstract profiles of structural stability point to universal tendencies, family-specific factors, and ancient connections between languages. PLoS One, 7(9), e45198. doi:10.1371/journal.pone.0045198.

    Abstract

    Language is the best example of a cultural evolutionary system, able to retain a phylogenetic signal over many thousands of years. The temporal stability (conservatism) of basic vocabulary is relatively well understood, but the stability of the structural properties of language (phonology, morphology, syntax) is still unclear. Here we report an extensive Bayesian phylogenetic investigation of the structural stability of numerous features across many language families and we introduce a novel method for analyzing the relationships between the “stability profiles” of language families. We found that there is a strong universal component across language families, suggesting the existence of universal linguistic, cognitive and genetic constraints. Against this background, however, each language family has a distinct stability profile, and these profiles cluster by geographic area and likely deep genealogical relationships. These stability profiles reveal, for example, the ancient historical relationships between the Siberian and American language families, presumed to be separated by at least 12,000 years. Thus, such higher-level properties of language seen as an evolutionary system might allow the investigation of ancient connections between languages and shed light on the peopling of the world.

  • Dediu, D., & Dingemanse, M. (2012). More than accent: Linguistic and cultural cues in the emergence of tag-based cooperation [Commentary]. Current Anthropology, 53, 606-607. doi:10.1086/667654.

    Abstract

    Commentary on Cohen, E. (2012). The evolution of tag-based cooperation in humans: The case for accent. Current Anthropology, 53, 588-616. doi:10.1086/667654.
  • Dediu, D., & Moisik, S. R. (2019). Pushes and pulls from below: Anatomical variation, articulation and sound change. Glossa: A Journal of General Linguistics, 4(1): 7. doi:10.5334/gjgl.646.

    Abstract

    This paper argues that inter-individual and inter-group variation in language acquisition, perception, processing and production, rooted in our biology, may play a largely neglected role in sound change. We begin by discussing the patterning of these differences, highlighting those related to vocal tract anatomy with a foundation in genetics and development. We use our ArtiVarK database, a large multi-ethnic sample comprising 3D intraoral optical scans, as well as structural, static and real-time MRI scans of vocal tract anatomy and speech articulation, to quantify the articulatory strategies used to produce the North American English /r/ and to statistically show that anatomical factors seem to influence these articulatory strategies. Building on work showing that these alternative articulatory strategies may have indirect coarticulatory effects, we propose two models for how biases due to variation in vocal tract anatomy may affect sound change. The first involves direct overt acoustic effects of such biases that are then reinterpreted by the hearers, while the second is based on indirect coarticulatory phenomena generated by acoustically covert biases that produce overt “at-a-distance” acoustic effects. This view implies that speaker communities might be “poised” for change because they always contain pools of “standing variation” of such biased speakers, and when factors such as the frequency of the biased speakers in the community, their positions in the communicative network or the topology of the network itself change, sound change may rapidly follow as a self-reinforcing network-level phenomenon, akin to a phase transition. Thus, inter-speaker variation in structured and dynamic communicative networks may couple the initiation and actuation of sound change.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2019). Weak biases emerging from vocal tract anatomy shape the repeated transmission of vowels. Nature Human Behaviour, 3, 1107-1115. doi:10.1038/s41562-019-0663-x.

    Abstract

    Linguistic diversity is affected by multiple factors, but it is usually assumed that variation in the anatomy of our speech organs plays no explanatory role. Here we use realistic computer models of the human speech organs to test whether inter-individual and inter-group variation in the shape of the hard palate (the bony roof of the mouth) affects acoustics of speech sounds. Based on 107 midsagittal MRI scans of the hard palate of human participants, we modelled with high accuracy the articulation of a set of five cross-linguistically representative vowels by agents learning to produce speech sounds. We found that different hard palate shapes result in subtle differences in the acoustics and articulatory strategies of the produced vowels, and that these individual-level speech idiosyncrasies are amplified by the repeated transmission of language across generations. Therefore, we suggest that, besides culture and environment, quantitative biological variation can be amplified, also influencing language.
  • Defina, R., & Majid, A. (2012). Conceptual event units of putting and taking in two unrelated languages. In N. Miyake, D. Peebles, & R. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 1470-1475). Austin, TX: Cognitive Science Society.

    Abstract

    People automatically chunk ongoing dynamic events into discrete units. This paper investigates whether linguistic structure is a factor in this process. We test the claim that describing an event with a serial verb construction will influence a speaker’s conceptual event structure. The grammar of Avatime (a Kwa language spoken in Ghana) requires its speakers to describe some, but not all, placement events using a serial verb construction which also encodes the preceding taking event. We tested Avatime and English speakers’ recognition memory for putting and taking events. Avatime speakers were more likely to falsely recognize putting and taking events from episodes associated with take-put serial verb constructions than from episodes associated with other constructions. English speakers showed no difference in false recognitions between episode types. This demonstrates that memory for episodes is related to the type of language used; and, moreover, across languages different conceptual representations are formed for the same physical episode, paralleling habitual linguistic practices.
  • Demir, Ö. E., So, W.-C., Ozyurek, A., & Goldin-Meadow, S. (2012). Turkish- and English-speaking children display sensitivity to perceptual context in referring expressions they produce in speech and gesture. Language and Cognitive Processes, 27, 844-867. doi:10.1080/01690965.2011.589273.

    Abstract

    Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children's sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.
  • Demontis, D., Walters, R. K., Martin, J., Mattheisen, M., Als, T. D., Agerbo, E., Baldursson, G., Belliveau, R., Bybjerg-Grauholm, J., Bækvad-Hansen, M., Cerrato, F., Chambert, K., Churchhouse, C., Dumont, A., Eriksson, N., Gandal, M., Goldstein, J. I., Grasby, K. L., Grove, J., Gudmundsson, O. O., Hansen, C. S., Hauberg, M. E., Hollegaard, M. V., Howrigan, D. P., Huang, H., Maller, J. B., Martin, A. R., Martin, N. G., Moran, J., Pallesen, J., Palmer, D. S., Pedersen, C. B., Pedersen, M. G., Poterba, T., Poulsen, J. B., Ripke, S., Robinson, E. B., Satterstrom, F. K., Stefansson, H., Stevens, C., Turley, P., Walters, G. B., Won, H., Wright, M. J., ADHD Working Group of the Psychiatric Genomics Consortium (PGC), EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, 23andme Research Team, Andreassen, O. A., Asherson, P., Burton, C. L., Boomsma, D. I., Cormand, B., Dalsgaard, S., Franke, B., Gelernter, J., Geschwind, D., Hakonarson, H., Haavik, J., Kranzler, H. R., Kuntsi, J., Langley, K., Lesch, K.-P., Middeldorp, C., Reif, A., Rohde, L. A., Roussos, P., Schachar, R., Sklar, P., Sonuga-Barke, E. J. S., Sullivan, P. F., Thapar, A., Tung, J. Y., Waldman, I. D., Medland, S. E., Stefansson, K., Nordentoft, M., Hougaard, D. M., Werge, T., Mors, O., Mortensen, P. B., Daly, M. J., Faraone, S. V., Børglum, A. D., & Neale, B. (2019). Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nature Genetics, 51, 63-75. doi:10.1038/s41588-018-0269-7.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a highly heritable childhood behavioral disorder affecting 5% of children and 2.5% of adults. Common genetic variants contribute substantially to ADHD susceptibility, but no variants have been robustly associated with ADHD. We report a genome-wide association meta-analysis of 20,183 individuals diagnosed with ADHD and 35,191 controls that identifies variants surpassing genome-wide significance in 12 independent loci, finding important new information about the underlying biology of ADHD. Associations are enriched in evolutionarily constrained genomic regions and loss-of-function intolerant genes and around brain-expressed regulatory marks. Analyses of three replication studies (a cohort of individuals diagnosed with ADHD, a self-reported ADHD sample, and a meta-analysis of quantitative measures of ADHD symptoms in the population) support these findings while highlighting study-specific differences in genetic overlap with educational attainment. Strong concordance with GWAS of quantitative population measures of ADHD symptoms supports that clinical diagnosis of ADHD is an extreme expression of continuous heritable traits.
  • DePape, A., Chen, A., Hall, G., & Trainor, L. (2012). Use of prosody and information structure in high functioning adults with Autism in relation to language ability. Frontiers in Psychology, 3, 72. doi:10.3389/fpsyg.2012.00072.

    Abstract

    Abnormal prosody is a striking feature of the speech of those with Autism Spectrum Disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without Autism Spectrum Disorder and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language function. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
  • Diaz, B., Hintz, F., Kiebel, S. J., & von Kriegstein, K. (2012). Dysfunction of the auditory thalamus in developmental dyslexia. Proceedings of the National Academy of Sciences of the United States of America, 109(34), 13841-13846. doi:10.1073/pnas.1119828109.

    Abstract

    Developmental dyslexia, a severe and persistent reading and spelling impairment, is characterized by difficulties in processing speech sounds (i.e., phonemes). Here, we test the hypothesis that these phonological difficulties are associated with a dysfunction of the auditory sensory thalamus, the medial geniculate body (MGB). By using functional MRI, we found that, in dyslexic adults, the MGB responded abnormally when the task required attending to phonemes compared with other speech features. No other structure in the auditory pathway showed distinct functional neural patterns between the two tasks for dyslexic and control participants. Furthermore, MGB activity correlated with dyslexia diagnostic scores, indicating that the task modulation of the MGB is critical for performance in dyslexics. These results suggest that deficits in dyslexia are associated with a failure of the neural mechanism that dynamically tunes MGB according to predictions from cortical areas to optimize speech processing. This view on task-related MGB dysfunction in dyslexics has the potential to reconcile influential theories of dyslexia within a predictive coding framework of brain function.

  • Díaz, B., Mitterer, H., Broersma, M., & Sebastián-Gallés, N. (2012). Individual differences in late bilinguals' L2 phonological processes: From acoustic-phonetic analysis to lexical access. Learning and Individual Differences, 22, 680-689. doi:10.1016/j.lindif.2012.05.005.

    Abstract

    The extent to which the phonetic system of a second language is mastered varies across individuals. The present study evaluates the pattern of individual differences in late bilinguals across different phonological processes. Fifty-five late Dutch-English bilinguals were tested on their ability to perceive a difficult L2 speech contrast (the English /æ/-/ε/ contrast) in three different tasks: A categorization task, a word identification task and a lexical decision task. As a group, L2 listeners were less accurate than native listeners. However, at the individual level, almost half of the L2 listeners scored within the native range in the categorization task whereas a small percentage scored within the native range in the identification and lexical decision tasks. These results show that L2 listeners' performance crucially depends on the nature of the task, with higher L2 listener accuracy on an acoustic-phonetic analysis task than on tasks involving lexical processes. These findings parallel previous results for early bilinguals, where the pattern of performance was consistent with the processing hierarchy proposed by different models of speech perception. The results indicate that the analysis of patterns of non-native performance can provide important insights concerning the architecture of the speech perception system and the issue of language learnability.
  • Dideriksen, C., Fusaroli, R., Tylén, K., Dingemanse, M., & Christiansen, M. H. (2019). Contextualizing Conversational Strategies: Backchannel, Repair and Linguistic Alignment in Spontaneous and Task-Oriented Conversations. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019) (pp. 261-267). Montreal, QC: Cognitive Science Society.

    Abstract

    Do interlocutors adjust their conversational strategies to the specific contextual demands of a given situation? Prior studies have yielded conflicting results, making it unclear how strategies vary with demands. We combine insights from qualitative and quantitative approaches in a within-participant experimental design involving two different contexts: spontaneously occurring conversations (SOC) and task-oriented conversations (TOC). We systematically assess backchanneling, other-repair and linguistic alignment. We find that SOC exhibit a higher number of backchannels, a reduced and more generic repair format and higher rates of lexical and syntactic alignment. TOC are characterized by a high number of specific repairs and a lower rate of lexical and syntactic alignment. However, when alignment occurs, more linguistic forms are aligned. The findings show that conversational strategies adapt to specific contextual demands.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dietrich, W., & Drude, S. (2015). Variation in Tupi languages: Genealogy, language change, and typology: Introduction. Boletim do Museu Paraense Emilio Goeldi: Ciencias Humanas, 10, 213-215. doi:10.1590/1981-81222015000200002.
  • Dieuleveut, A., Van Dooren, A., Cournane, A., & Hacquard, V. (2019). Acquiring the force of modals: Sig you guess what sig means? In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 189-202). Somerville, MA: Cascadilla Press.
  • Dimitrova, D. V., Stowe, L. A., Redeker, G., & Hoeks, J. C. J. (2012). Less is not more: Neural responses to missing and superfluous accents in context. Journal of Cognitive Neuroscience, 24, 2400-2418. doi:10.1162/jocn_a_00302.

    Abstract

    Prosody, particularly accent, aids comprehension by drawing attention to important elements such as the information that answers a question. A study using ERP registration investigated how the brain deals with the interpretation of prosodic prominence. Sentences were embedded in short dialogues and contained accented elements that were congruous or incongruous with respect to a preceding question. In contrast to previous studies, no explicit prosodic judgment task was added. Robust effects of accentuation were evident in the form of an “accent positivity” (200–500 msec) for accented elements irrespective of their congruity. Our results show that incongruously accented elements, that is, superfluous accents, activate a specific set of neural systems that is inactive in case of incongruously unaccented elements, that is, missing accents. Superfluous accents triggered an early positivity around 100 msec poststimulus, followed by a right-lateralized negative effect (N400). This response suggests that redundant information is identified immediately and leads to the activation of a neural system that is associated with semantic processing (N400). No such effects were found when contextually expected accents were missing. In a later time window, both missing and superfluous accents triggered a late positivity on midline electrodes, presumably related to making sense of both kinds of mismatching stimuli. These results challenge previous findings of greater processing for missing accents and suggest that the natural processing of prosody involves a set of distinct, temporally organized neural systems.
  • Dimitrova, D. V., Stowe, L. A., & Hoeks, J. C. (2015). When correction turns positive: Processing corrective prosody in Dutch. PLoS One, 10(5): e0126299. doi:10.1371/journal.pone.0126299.

    Abstract

    Current research on spoken language does not provide a consistent picture as to whether prosody, the melody and rhythm of speech, conveys a specific meaning. Perception studies show that English listeners assign meaning to prosodic patterns, and, for instance, associate some accents with contrast, whereas results for Dutch listeners are more mixed. In two ERP studies we tested how Dutch listeners process words carrying two types of accents, which either provided new information (new information accents) or corrected information (corrective accents), both in single sentences (experiment 1) and after corrective and new information questions (experiment 2). In both experiments corrective accents elicited a sustained positivity as compared to new information accents, which started earlier in context than in single sentences. The positivity was not modulated by the nature of the preceding question, suggesting that the underlying neural mechanism likely reflects the construction of an interpretation of the accented word, either by identifying an alternative in context or by inferring it when no context is present. Our experimental results provide strong evidence for inferential processes related to prosodic contours in Dutch.
  • Dingemanse, M., & Enfield, N. J. (2015). Ungeschriebene Gesetze. Gehirn und Geist, 8, 34-39.
  • Dingemanse, M., & Van Leeuwen, T. (2015). Boekoeboekoe is mollig: Taal als samenspel van de zintuigen. Onze Taal, (12), 344-345.
  • Dingemanse, M., Hammond, J., Stehouwer, H., Somasundaram, A., & Drude, S. (2012). A high speed transcription interface for annotating primary linguistic data. In Proceedings of 6th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (pp. 7-12). Stroudsburg, PA: Association for Computational Linguistics.

    Abstract

    We present a new transcription mode for the annotation tool ELAN. This mode is designed to speed up the process of creating transcriptions of primary linguistic data (video and/or audio recordings of linguistic behaviour). We survey the basic transcription workflow of some commonly used tools (Transcriber, BlitzScribe, and ELAN) and describe how the new transcription interface improves on these existing implementations. We describe the design of the transcription interface and explore some further possibilities for improvement in the areas of segmentation and computational enrichment of annotations.
  • Dingemanse, M. (2012). Advances in the cross-linguistic study of ideophones. Language and Linguistics Compass, 6, 654-672. doi:10.1002/lnc3.361.

    Abstract

    Ideophones are marked words that depict sensory imagery found in many of the world’s languages. They are noted for their special forms, distinct grammatical behaviour, rich sensory meanings, and interactional uses related to experience and evidentiality. This review surveys recent developments in ideophone research. Work on the semiotics of ideophones helps explain why they are marked and how they realise the depictive potential of speech. A true semantic typology of ideophone systems is coming within reach through a combination of language-internal analyses and language-independent elicitation tools. Documentation of ideophones in a wide variety of genres as well as sequential analysis of ideophone use in natural discourse leads to new insights about their interactional uses and about their relation to other linguistic devices like reported speech and grammatical evidentials. As the study of ideophones is coming of age, it sheds new light on what is possible and probable in human language.
  • Dingemanse, M., Blasi, D. E., Lupyan, G., Christiansen, M. H., & Monaghan, P. (2015). Arbitrariness, iconicity and systematicity in language. Trends in Cognitive Sciences, 19(10), 603-615. doi:10.1016/j.tics.2015.07.013.

    Abstract

    The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested form-to-meaning correspondences in the world’s languages. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help explain how these competing motivations shape vocabulary structure.
  • Dingemanse, M. (2015). Ideophones and Reduplication: Depiction, Description, and the Interpretation of Repeated Talk in Discourse. Studies in Language, 39(4), 946-970. doi:10.1075/sl.39.4.05din.

    Abstract

    Repetition is one of the most basic operations on talk, often discussed for its iconic meanings. Ideophones are marked words that depict sensory imagery, often identified by their reduplicated forms. Yet not all reduplication is iconic, and not all ideophones are reduplicated. This paper discusses the semantics and pragmatics of repeated talk (repetition as well as reduplication), with special focus on the intersection of reduplicative processes and ideophonic words. Various formal features of ideophones suggest that it is fruitful to distinguish two modes of representation in language —description and depiction— along with cues like prosodic foregrounding that can steer listeners’ interpretation from one to the other. What is special about reduplication is that it can naturally partake in both of these modes of representation, which is why it is so common in ideophones as well as in other areas of grammar. Using evidence from Siwu, Korean, Semai and a range of other languages, this paper shows how the study of ideophones sheds light on the interpretation of repeated talk and can lead to a more holistic understanding of reduplicative phenomena in language.
  • Dingemanse, M., & Enfield, N. J. (2015). Other-initiated repair across languages: Towards a typology of conversational structures. Open Linguistics, 1, 98-118. doi:10.2478/opli-2014-0007.

    Abstract

    This special issue reports on a cross-linguistic study of other-initiated repair, a domain at the crossroads of language, mind, and social life. Other-initiated repair is part of a system of practices that people use to deal with problems of speaking, hearing and understanding. The contributions in this special issue describe the linguistic resources and interactional practices associated with other-initiated repair in ten different languages. Here we provide an overview of the research methods and the conceptual framework. The empirical base for the project consists of corpora of naturally occurring conversations, collected in fieldsites around the world. Methodologically, we combine qualitative analysis with a comparative-typological perspective, and we formulate principles for the cross-linguistic comparison of conversational structures. A key move, of broad relevance to pragmatic typology, is the recognition that formats for repair initiation form paradigm-like systems that are ultimately language-specific, and that comparison is best done at the level of the constitutive properties of these formats. These properties can be functional (concerning aspects of linguistic formatting) as well as sequential (concerning aspects of the interactional environment). We show how functional and sequential aspects of conversational structure can capture patterns of commonality and diversity in conversational structures within and across languages.
  • Dingemanse, M. (2015). Other-initiated repair in Siwu. Open Linguistics, 1, 232-255. doi:10.1515/opli-2015-0001.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair in Siwu, a Kwa language spoken in eastern Ghana. Other-initiated repair is the set of techniques used by people to deal with problems in speaking, hearing and understanding. Formats for repair initiation in Siwu exploit language-specific resources like question words and noun class morphology. At the same time, the basic structure of the system bears a strong similarity to other-initiated repair in other languages. Practices described for Siwu thus are potentially of broader relevance to the study of other-initiated repair. This article documents how different prosodic realisations of repair initiators may index social actions and features of the speech event; how two distinct roles of repetition in repair initiators are kept apart by features of turn design; and what kinds of items can be treated as ‘dispensable’ in resayings. By charting how other-initiated repair uses local linguistic resources and yet is shaped by interactional needs that transcend particular languages, this study contributes to the growing field of pragmatic typology: the study of systems of language use and the principles that shape them.
  • Dingemanse, M., & Majid, A. (2012). The semantic structure of sensory vocabulary in an African language. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 300-305). Austin, TX: Cognitive Science Society.

    Abstract

    The widespread occurrence of ideophones, large classes of words specialized in evoking sensory imagery, is little known outside linguistics and anthropology. Ideophones are a common feature in many of the world’s languages but are underdeveloped in English and other Indo-European languages. Here we study the meanings of ideophones in Siwu (a Kwa language from Ghana) using a pile-sorting task. The goal was to uncover the underlying structure of the lexical space and to examine the claimed link between ideophones and perception. We found that Siwu ideophones are principally organized around fine-grained aspects of sensory perception, and map onto salient psychophysical dimensions identified in sensory science. The results ratify ideophones as dedicated sensory vocabulary and underline the relevance of ideophones for research on language and perception.
  • Dingemanse, M., Roberts, S. G., Baranova, J., Blythe, J., Drew, P., Floyd, S., Gisladottir, R. S., Kendrick, K. H., Levinson, S. C., Manrique, E., Rossi, G., & Enfield, N. J. (2015). Universal Principles in the Repair of Communication Problems. PLoS One, 10(9): e0136100. doi:10.1371/journal.pone.0136100.

    Abstract

    There would be little adaptive value in a complex communication system like human language if there were no ways to detect and correct problems. A systematic comparison of conversation in a broad sample of the world’s languages reveals a universal system for the real-time resolution of frequent breakdowns in communication. In a sample of 12 languages of 8 language families of varied typological profiles we find a system of ‘other-initiated repair’, where the recipient of an unclear message can signal trouble and the sender can repair the original message. We find that this system is frequently used (on average about once per 1.4 minutes in any language), and that it has detailed common properties, contrary to assumptions of radical cultural variation. Unrelated languages share the same three functionally distinct types of repair initiator for signalling problems and use them in the same kinds of contexts. People prefer to choose the type that is the most specific possible, a principle that minimizes cost both for the sender being asked to fix the problem and for the dyad as a social unit. Disruption to the conversation is kept to a minimum, with the two-utterance repair sequence being on average no longer than the single utterance which is being fixed. The findings, controlled for historical relationships, situation types and other dependencies, reveal the fundamentally cooperative nature of human communication and offer support for the pragmatic universals hypothesis: while languages may vary in the organization of grammar and meaning, key systems of language use may be largely similar across cultural groups. They also provide a fresh perspective on controversies about the core properties of language, by revealing a common infrastructure for social interaction which may be the universal bedrock upon which linguistic diversity rests.
  • Dittmar, N., Reich, A., Skiba, R., Schumacher, M., & Terborg, H. (1990). Die Erlernung modaler Konzepte des Deutschen durch erwachsene polnische Migranten: Eine empirische Längsschnittstudie. Informationen Deutsch als Fremdsprache: Info DaF, 17(2), 125-172.
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2012). The sound of thickness: Prelinguistic infants' associations of space and pitch. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 306-311). Austin, TX: Cognitive Science Society.

    Abstract

    People often talk about musical pitch in terms of spatial metaphors. In English, for instance, pitches can be high or low, whereas in other languages pitches are described as thick or thin. According to psychophysical studies, metaphors in language can also shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place or does it modify preexisting associations? Here we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings in two preferential looking tasks. Dutch infants looked significantly longer at cross-modally congruent stimuli in both experiments, indicating that infants are sensitive to space-pitch associations prior to language. This early presence of space-pitch mappings suggests that these associations do not originate from language. Rather, language may build upon pre-existing mappings and change them gradually via some form of competitive associative learning.
  • Dolscheid, S., Hunnius, S., & Majid, A. (2015). When high pitches sound low: Children's acquisition of space-pitch metaphors. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 584-598). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2015/papers/0109/index.html.

    Abstract

    Some languages describe musical pitch in terms of spatial height; others in terms of thickness. Differences in pitch metaphors also shape adults’ nonlinguistic space-pitch representations. At the same time, 4-month-old infants have both types of space-pitch mappings available. This tension between prelinguistic space-pitch associations and their subsequent linguistic mediation raises questions about the acquisition of space-pitch metaphors. To address this issue, 5-year-old Dutch children were tested on their linguistic knowledge of pitch metaphors, and nonlinguistic space-pitch associations. Our results suggest 5-year-olds understand height-pitch metaphors in a reversed fashion (high pitch = low). Children displayed good comprehension of a thickness-pitch metaphor, despite its absence in Dutch. In nonlinguistic tasks, however, children did not show consistent space-pitch associations. Overall, pitch representations do not seem to be influenced by linguistic metaphors in 5-year-olds, suggesting that effects of language on musical pitch arise rather late during development.
  • Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.

    Abstract

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

  • Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.

    Abstract

    Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.

  • Drijvers, L., Zaadnoordijk, L., & Dingemanse, M. (2015). Sound-symbolism is disrupted in dyslexia: Implications for the role of cross-modal abstraction processes. In D. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 602-607). Austin, Tx: Cognitive Science Society.

    Abstract

    Research into sound-symbolism has shown that people can consistently associate certain pseudo-words with certain referents; for instance, pseudo-words with rounded vowels and sonorant consonants are linked to round shapes, while pseudo-words with unrounded vowels and obstruents (with a non-continuous airflow) are associated with sharp shapes. Such sound-symbolic associations have been proposed to arise from cross-modal abstraction processes. Here we assess the link between sound-symbolism and cross-modal abstraction by testing dyslexic individuals’ ability to make sound-symbolic associations. Dyslexic individuals are known to have deficiencies in cross-modal processing. We find that dyslexic individuals are impaired in their ability to make sound-symbolic associations relative to the controls. Our results shed light on the cognitive underpinnings of sound-symbolism by providing novel evidence for the role —and disruptability— of cross-modal abstraction processes in sound-symbolic effects.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. 
Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2015). The effect of non-nativeness and background noise on lexical retuning. In Scottish consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    Previous research revealed remarkable flexibility of native and non-native listeners’ perceptual system, i.e., native and non-native phonetic category boundaries can be quickly recalibrated in the face of ambiguous input. The present study investigates the limitations of the flexibility of the non-native perceptual system. In two lexically-guided perceptual learning experiments, Dutch listeners were exposed to a short story in English, where either all /l/ or all /ɹ/ sounds were replaced by an ambiguous [l/ɹ] sound. In the first experiment, the story was presented in clean, while in the second experiment, intermittent noise was added to the story, although never on the critical words. Lexically-guided perceptual learning was only observed in the clean condition. It is argued that the introduction of intermittent noise reduced the reliability of the evidence of hearing a particular word, which in turn blocked retuning of the phonetic categories.
  • Drude, S., Awete, W., & Aweti, A. (2019). A ortografia da língua Awetí. LIAMES: Línguas Indígenas Americanas, 19: e019014. doi:10.20396/liames.v19i0.8655746.

    Abstract

    This paper describes and motivates the orthography of the Awetí language (Tupí, Upper Xingu/MT), based on an analysis of the phonological and grammatical structure of Awetí. The orthography is the result of a long collaborative effort among the three authors, begun in 1998. It defines not only an alphabet (the representation of the language's vowels and consonants), but also addresses internal variation, resyllabification, lenition, palatalization, and other (morpho-)phonological processes. Both the written representation of the glottal stop and the orthographic consequences of nasal harmony received special attention. Although lexical stress is not orthographically marked in Awetí, the great majority of affixes and particles are discussed with respect to stress and its interaction with adjacent morphemes, at the same time determining the orthographic words. Finally, an alphabetical order was established in which digraphs are treated as sequences of letters, while the glottal stop ⟨ʼ⟩ is ignored, facilitating the learning of Awetí. The orthography as described here has been used for approximately ten years in school literacy teaching in Awetí, with good results. We believe that several of the arguments raised here can be productively transferred to other languages with similar phenomena (the glottal stop as a consonant, nasal harmony, morphophonological assimilation, etc.).
  • Drude, S. (2012). [Review of the book O português e o tupi no Brasil by Volker Noll and Wolf Dietrich]. Revista Internacional de Lingüística Iberoamerican, 19, 264-268.
  • Drude, S., Trilsbeek, P., & Broeder, D. (2012). Language Documentation and Digital Humanities: The (DoBeS) Language Archive. In J. C. Meister (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 169-173).

    Abstract

    Since the early nineties, the on-going dramatic loss of the world’s linguistic diversity has gained attention, first by linguists and increasingly also by the general public. As a response, the new field of language documentation emerged from around 2000 on, starting with the funding initiative ‘Dokumentation Bedrohter Sprachen’ (DoBeS, funded by the Volkswagen Foundation, Germany), soon to be followed by others such as the ‘Endangered Languages Documentation Programme’ (ELDP, at SOAS, London), or, in the USA, ‘Electronic Meta-structure for Endangered Languages Documentation’ (EMELD, led by the LinguistList) and ‘Documenting Endangered Languages’ (DEL, by the NSF). From its very beginning, the new field focused on digital technologies not only for recording in audio and video, but also for annotation, lexical databases, corpus building and archiving, among others. This development not just coincides but is intrinsically interconnected with the increasing focus on digital data, technology and methods in all sciences, in particular in the humanities.
  • Drude, S., Broeder, D., Trilsbeek, P., & Wittenburg, P. (2012). The Language Archive: A new hub for language resources. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3264-3267). European Language Resources Association (ELRA).

    Abstract

    This contribution presents “The Language Archive” (TLA), a new unit at the MPI for Psycholinguistics, discussing the current developments in management of scientific data, considering the need for new data research infrastructures. Although several initiatives worldwide in the realm of language resources aim at the integration, preservation and mobilization of research data, the state of such scientific data is still often problematic. Data are often not well organized and archived, and not described by metadata; even unique data, such as field-work observational data on endangered languages, are still mostly on perishable carriers. New data centres are needed that provide trusted, quality-reviewed, persistent services and suitable tools and that take legal and ethical issues seriously. The CLARIN initiative has established criteria for suitable centres. TLA is in a good position to be one such centre. It is based on three essential pillars: (1) a data archive; (2) management, access and annotation tools; (3) archiving and software expertise for collaborative projects. The archive hosts mostly observational data on small languages worldwide and language acquisition data, but also data resulting from experiments.
  • Duarri, A., Meng-Chin, A. L., Fokkens, M. R., Meijer, M., Smeets, C. J. L. M., Nibbeling, E. A. R., Boddeke, E., Sinke, R. J., Kampinga, H. H., Papazian, D. M., & Verbeek, D. S. (2015). Spinocerebellar ataxia type 19/22 mutations alter heterocomplex Kv4.3 channel function and gating in a dominant manner. Cellular and Molecular Life Sciences, 72(17), 3387-3399. doi:10.1007/s00018-015-1894-2.

    Abstract

    The dominantly inherited cerebellar ataxias are a heterogeneous group of neurodegenerative disorders caused by Purkinje cell loss in the cerebellum. Recently, we identified loss-of-function mutations in the KCND3 gene as the cause of spinocerebellar ataxia type 19/22 (SCA19/22), revealing a previously unknown role for the voltage-gated potassium channel, Kv4.3, in Purkinje cell survival. However, how mutant Kv4.3 affects wild-type Kv4.3 channel functioning remains unknown. We provide evidence that SCA19/22-mutant Kv4.3 exerts a dominant negative effect on the trafficking and surface expression of wild-type Kv4.3 in the absence of its regulatory subunit, KChIP2. Notably, this dominant negative effect can be rescued by the presence of KChIP2. We also found that all SCA19/22-mutant subunits either suppress wild-type Kv4.3 current amplitude or alter channel gating in a dominant manner. Our findings suggest that altered Kv4.3 channel localization and/or functioning resulting from SCA19/22 mutations may lead to Purkinje cell loss, neurodegeneration and ataxia.
  • Dunbar, R., Baron, R., Frangou, A., Peirce, E., Van Leeuwen, E. J. C., Stow, J., Partridge, G., MacDonald, I., Barra, V., & Van Vugt, M. (2012). Social laughter is correlated with an elevated pain threshold. Proceedings of the Royal Society of London/B, 279, 1161-1167. doi:10.1098/rspb.2011.1373.

    Abstract

    Although laughter forms an important part of human non-verbal communication, it has received rather less attention than it deserves in both the experimental and the observational literatures. Relaxed social (Duchenne) laughter is associated with feelings of wellbeing and heightened affect, a proximate explanation for which might be the release of endorphins. We tested this hypothesis in a series of six experimental studies in both the laboratory (watching videos) and naturalistic contexts (watching stage performances), using change in pain threshold as an assay for endorphin release. The results show that pain thresholds are significantly higher after laughter than in the control condition. This pain-tolerance effect is due to laughter itself and not simply due to a change in positive affect. We suggest that laughter, through an endorphin-mediated opiate effect, may play a crucial role in social bonding.

    Additional information

    Dunbar_et_al_suppl_material.doc
  • Dunn, M., & Terrill, A. (2012). Assessing the evidence for a Central Solomons Papuan family using the Oswalt Monte Carlo Test. Diachronica, 29(1), 1-27. doi:10.1075/dia.29.1.01dun.

    Abstract

    In the absence of comparative method reconstruction, a high rate of lexical cognate candidates is often used as evidence for relationships between languages. This paper uses the Oswalt Monte Carlo Shift test (a variant of Oswalt 1970) to explore the statistical basis of the claim that the four Papuan languages of the Solomon Islands have greater than chance levels of lexical similarity. The results of this test initially appear to show that the lexical similarities between the Central Solomons Papuan languages are statistically significant, but the effect disappears when known Oceanic loanwords are removed. The Oswalt Monte Carlo test is a useful technique to test a claim of greater than chance similarity between any two word lists — with the proviso that undetected loanwords strongly increase the chance of spurious identification.
  • Dunn, M. (2012). [Review of the book "The Dene-Yeniseian connection", ed. by J. Kari and B.A. Potter]. Language, 88(2), 429-431. doi:10.1353/lan.2012.0036.

    Abstract

    A review of "Anthropological papers of the University of Alaska: The Dene-Yeniseian connection." Ed. by JAMES KARI and BEN A. POTTER Fairbanks, AK: University of Alaska, Fairbanks. 2010. Pp. vi, 363.
  • Eerland, A., Guadalupe, T., & Zwaan, R. A. (2012). Posture as index for approach-avoidance behavior. PLoS One, 7(2), e31291. doi:10.1371/journal.pone.0031291.

    Abstract

    Approach and avoidance are two behavioral responses that make people tend to approach positive and avoid negative situations. This study examines whether postural behavior is influenced by the affective state of pictures. While standing on the Wii™ Balance Board, participants viewed pleasant, neutral, and unpleasant pictures (passively viewing phase). Then they had to move their body to the left or the right (lateral movement phase) to make the next picture appear. We recorded movements in the anterior-posterior direction to examine approach and avoidant behavior. During passively viewing, people approached pleasant pictures. They avoided unpleasant ones while they made a lateral movement. These findings provide support for the idea that we tend to approach positive and avoid negative situations.
  • Eijk, L., Ernestus, M., & Schriefers, H. (2019). Alignment of pitch and articulation rate. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2690-2694). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Previous studies have shown that speakers align their speech to each other at multiple linguistic levels. This study investigates whether alignment is mostly the result of priming from the immediately preceding speech materials, focussing on pitch and articulation rate (AR). Native Dutch speakers completed sentences, first by themselves (pre-test), then in alternation with Confederate 1 (Round 1), with Confederate 2 (Round 2), with Confederate 1 again (Round 3), and lastly by themselves again (post-test). Results indicate that participants aligned to the confederates and that this alignment lasted during the post-test. The confederates’ directly preceding sentences were not good predictors for the participants’ pitch and AR. Overall, the results indicate that alignment is more of a global effect than a local priming effect.
  • Eising, E., Carrion Castillo, A., Vino, A., Strand, E. A., Jakielski, K. J., Scerri, T. S., Hildebrand, M. S., Webster, R., Ma, A., Mazoyer, B., Francks, C., Bahlo, M., Scheffer, I. E., Morgan, A. T., Shriberg, L. D., & Fisher, S. E. (2019). A set of regulatory genes co-expressed in embryonic human brain is implicated in disrupted speech development. Molecular Psychiatry, 24, 1065-1078. doi:10.1038/s41380-018-0020-x.

    Abstract

    Genetic investigations of people with impaired development of spoken language provide windows into key aspects of human biology. Over 15 years after FOXP2 was identified, most speech and language impairments remain unexplained at the molecular level. We sequenced whole genomes of nineteen unrelated individuals diagnosed with childhood apraxia of speech, a rare disorder enriched for causative mutations of large effect. Where DNA was available from unaffected parents, we discovered de novo mutations, implicating genes, including CHD3, SETD1A and WDR5. In other probands, we identified novel loss-of-function variants affecting KAT6A, SETBP1, ZFHX4, TNRC6B and MKL2, regulatory genes with links to neurodevelopment. Several of the new candidates interact with each other or with known speech-related genes. Moreover, they show significant clustering within a single co-expression module of genes highly expressed during early human brain development. This study highlights gene regulatory pathways in the developing brain that may contribute to acquisition of proficient speech.

    Additional information

    Eising_etal_2018sup.pdf
  • Eisner, F. (2012). Competition in the acoustic encoding of emotional speech. In L. McCrohon (Ed.), Five approaches to language evolution. Proceedings of the workshops of the 9th International Conference on the Evolution of Language (pp. 43-44). Tokyo: Evolang9 Organizing Committee.

    Abstract

    1. Introduction Speech conveys not only linguistic meaning but also paralinguistic information, such as features of the speaker’s social background, physiology, and emotional state. Linguistic and paralinguistic information is encoded in speech by using largely the same vocal apparatus and both are transmitted simultaneously in the acoustic signal, drawing on a limited set of acoustic cues. How this simultaneous encoding is achieved, how the different types of information are disentangled by the listener, and how much they interfere with one another is presently not well understood. Previous research has highlighted the importance of acoustic source and filter cues for emotion and linguistic encoding respectively, which may suggest that the two types of information are encoded independently of each other. However, those lines of investigation have been almost completely disconnected (Murray & Arnott, 1993).
  • Elbers, W., Broeder, D., & Van Uytvanck, D. (2012). Proper language resource centers. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 3260-3263). European Language Resources Association (ELRA).

    Abstract

    Language resource centers allow researchers to reliably deposit their structured data together with associated meta data and run services operating on this deposited data. We are looking into possibilities to create long-term persistency of both the deposited data and the services operating on this data. Challenges, both technical and non-technical, that need to be solved are the need to replicate more than just the data, proper identification of the digital objects in a distributed environment by making use of persistent identifiers and the set-up of a proper authentication and authorization domain including the management of the authorization information on the digital objects. We acknowledge the investment that most language resource centers have made in their current infrastructure. Therefore one of the most important requirements is the loose coupling with existing infrastructures without the need to make many changes. This shift from a single language resource center into a federated environment of many language resource centers is discussed in the context of a real world center: The Language Archive supported by the Max Planck Institute for Psycholinguistics.
  • Ellis-Davies, K., Sakkalou, E., Fowler, N., Hilbrink, E., & Gattis, M. (2012). CUE: The continuous unified electronic diary method. Behavior Research Methods, 44, 1063-1078. doi:10.3758/s13428-012-0205-1.

    Abstract

    In the present article, we introduce the continuous unified electronic (CUE) diary method, a longitudinal, event-based, electronic parent report method that allows real-time recording of infant and child behavior in natural contexts. Thirty-nine expectant mothers were trained to identify and record target behaviors into programmed handheld computers. From birth to 18 months, maternal reporters recorded the initial, second, and third occurrences of seven target motor behaviors: palmar grasp, rolls from side to back, reaching when sitting, pincer grip, crawling, walking, and climbing stairs. Compliance was assessed as two valid entries per behavior: 97 % of maternal reporters met compliance criteria. Reliability was assessed by comparing diary entries with researcher assessments for three of the motor behaviors: palmar grasp, pincer grip and walking. A total of 81 % of maternal reporters met reliability criteria. For those three target behaviors, age of emergence was compared across data from the CUE diary method and researcher assessments. The CUE diary method was found to detect behaviors earlier and with greater sensitivity to individual differences. The CUE diary method is shown to be a reliable methodological tool for studying processes of change in human development.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2012). Diversity disregarded [Review of the book Games primates play: An undercover investigation of the evolution and economics of human relationships by Dario Maestripieri]. Science, 337, 1295-1296. doi:10.1126/science.1225365.
  • Enfield, N. J., & Sidnell, J. (2012). Collateral effects, agency, and systems of language use [Reply to commentators]. Current Anthropology, 53(3), 327-329.
  • Enfield, N. J. (2012). [Review of the book "Language, culture, and mind: Natural constructions and social kinds", by Paul Kockelman]. Language in Society, 41(5), 674-677. doi:10.1017/S004740451200070X.
  • Enfield, N. J. (2004). Areal grammaticalisation of postverbal 'acquire' in mainland Southeast Asia. In S. Burusphat (Ed.), Proceedings of the 11th Southeast Asia Linguistics Society Meeting (pp. 275-296). Arizona State University: Tempe.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (2012). Language innateness [Letter to the Editor]. The Times Literary Supplement, October 26, 2012(5717), 6.
  • Enfield, N. J. (2015). Linguistic relativity from reference to agency. Annual Review of Anthropology, 44, 207-224. doi:10.1146/annurev-anthro-102214-014053.

    Abstract

    How are language, thought, and reality related? Interdisciplinary research on this question over the past two decades has made significant progress. Most of the work has been Neo-Whorfian in two senses: One, it has been driven by research questions that were articulated most explicitly and most famously by the linguistic anthropologist Benjamin Lee Whorf, and two, it has limited the scope of inquiry to Whorf's narrow interpretations of the key terms “language,” “thought,” and “reality.” This article first reviews some of the ideas and results of Neo-Whorfian work, concentrating on the special role of linguistic categorization in heuristic decision making. It then considers new and potential directions in work on linguistic relativity, taken broadly to mean the ways in which the perspective offered by a given language can affect thought (or mind) and reality. New lines of work must reconsider the idea of linguistic relativity by exploring the range of available interpretations of the key terms: in particular, “language” beyond reference, “thought” beyond nonsocial processing, and “reality” beyond brute, nonsocial facts.
  • Enfield, N. J. (2015). Other-initiated repair in Lao. Open linguistics, 1(1), 119-144. doi:10.2478/opli-2014-0006.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversation in the Lao language (a Southwestern Tai language spoken in Laos, Thailand, and Cambodia). The article reports findings specific to the Lao language from the comparative project that is the topic of this special issue. While the scope is general to the overall pattern of other-initiated repair as a set of practices and a system of semiotic resources, special attention is given to (1) the range of repair operations that are elicited by open other-initiators of repair in Lao, especially the subtle changes made when problem turns are repeated, and (2) the use of phrase-final particles—a characteristic feature of Lao grammar—in the marking of both other-initiations of repair and repair solution turns.
  • Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.

    Abstract

    How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies; first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
  • Enfield, N. J. (2012). The slow explosion of speech [Review of the book The origins of Grammar by James R. Hurford]. The Times Literary Supplement, March 30, 2012(5687), 11-12. Retrieved from http://www.the-tls.co.uk/tls/public/article1004404.ece.

    Abstract

    Book review of James R. Hurford THE ORIGINS OF GRAMMAR 791pp. Oxford University Press. ISBN 978 0 19 920787 9
  • Erard, M. (2015). What's in a name? Science, 347(6225), 941-943. doi:10.1126/science.347.6225.941.
  • Erb, J., Henry, M. J., Eisner, F., & Obleser, J. (2012). Auditory skills and brain morphology predict individual differences in adaptation to degraded speech. Neuropsychologia, 50, 2154-2164. doi:10.1016/j.neuropsychologia.2012.05.013.

    Abstract

    Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable by non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, where modulation rates were centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned faster to understand degraded speech also showed smaller thresholds in the AM discrimination task. This ability to adjust to degraded speech is furthermore reflected anatomically in increased volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech.
  • Ernestus, M., & Cutler, A. (2015). BALDEY: A database of auditory lexical decisions. Quarterly Journal of Experimental Psychology, 68, 1469-1488. doi:10.1080/17470218.2014.984730.

    Abstract

    In an auditory lexical decision experiment, 5,541 spoken content words and pseudo-words were presented to 20 native speakers of Dutch. The words vary in phonological makeup and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudo-words were matched in these respects to the real words. The BALDEY data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbors, and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles and frequency ratings by 70 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip. Onze Taal, 73(12), 360-361.
  • Ernestus, M., Hanique, I., & Verboom, E. (2015). The effect of speech situation on the occurrence of reduced word pronunciation variants. Journal of Phonetics, 48, 60-75. doi:10.1016/j.wocn.2014.08.001.

    Abstract

    This article presents two studies investigating how the situation in which speech is uttered affects the frequency with which words are reduced. Study 1 is based on the Spoken Dutch Corpus, which consists of 15 components, nearly all representing a different speech situation. This study shows that the components differ in how often ten semantically weak words are highly reduced. The differences are especially large between the components with scripted and unscripted speech. Within the component group of unscripted speech, the formality of the situation shows an effect. Study 2 investigated segment reduction in a shadowing experiment in which participants repeated Dutch carefully and casually articulated sentences. Prefixal schwa and suffixal /t/ were absent in participants' responses to both sentence types as often as in formal interviews. If a segment was absent, this appeared to be mostly due to extreme co-articulation, unlike in speech produced in less formal situations. Speakers thus adapted more to the formal situation of the experiment than to the stimuli to be shadowed. We conclude that speech situation affects the occurrence of reduced word pronunciation variants, which should be accounted for by psycholinguistic models of speech production and comprehension.
  • Escudero, P., Simon, E., & Mitterer, H. (2012). The perception of English front vowels by North Holland and Flemish listeners: Acoustic similarity predicts and explains cross-linguistic and L2 perception. Journal of Phonetics, 40, 280-288. doi:10.1016/j.wocn.2011.11.004.

    Abstract

    We investigated whether regional differences in the native language (L1) influence the perception of second language (L2) sounds. Many cross-language and L2 perception studies have assumed that the degree of acoustic similarity between L1 and L2 sounds predicts cross-linguistic and L2 performance. The present study tests this assumption by examining the perception of the English contrast between /ɛ/ and /æ/ in native speakers of Dutch spoken in North Holland (the Netherlands) and in East- and West-Flanders (Belgium). A Linear Discriminant Analysis on acoustic data from both dialects showed that their differences in vowel production, as reported in Adank, van Hout, and Van de Velde (2007), should influence the perception of the L2 vowels if listeners focus on the vowels' acoustic/auditory properties. Indeed, the results of categorization tasks with Dutch or English vowels as response options showed that the two listener groups differed as predicted by the discriminant analysis. Moreover, the results of the English categorization task revealed that both groups of Dutch listeners displayed the asymmetric pattern found in previous word recognition studies, i.e. English /æ/ was more frequently confused with English /ɛ/ than the reverse. This suggests a strong link between previous L2 word learning results and the present L2 perceptual assimilation patterns.
  • Esling, J. H., Benner, A., & Moisik, S. R. (2015). Laryngeal articulatory function and speech origins. In H. Little (Ed.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015) Satellite Event: The Evolution of Phonetic Capabilities: Causes, constraints, consequences (pp. 2-7). Glasgow: ICPhS.

    Abstract

    The larynx is the essential articulatory mechanism that primes the vocal tract. Far from being only a glottal source of voicing, the complex laryngeal mechanism entrains the ontogenetic acquisition of speech and, through coarticulatory coupling, guides the production of oral sounds in the infant vocal tract. As such, it is not possible to speculate as to the origins of the speaking modality in humans without considering the fundamental role played by the laryngeal articulatory mechanism. The Laryngeal Articulator Model, which divides the vocal tract into a laryngeal component and an oral component, serves as a basis for describing early infant speech and for positing how speech sounds evolving in various hominids may be related phonetically. To this end, we offer some suggestions for how the evolution and development of vocal tract anatomy fit with our infant speech acquisition data and discuss the implications this has for explaining phonetic learning and for interpreting the biological evolution of the human vocal tract in relation to speech and speech acquisition.
  • Estruch, S. B., Buzon, V., Carbo, L. R., Schorova, L., Luders, J., & Estebanez-Perpina, E. (2012). The oncoprotein BCL11A binds to Orphan Nuclear Receptor TLX and potentiates its transrepressive function. PLoS One, 7(6): e37963. doi:10.1371/journal.pone.0037963.

    Abstract

    Nuclear orphan receptor TLX (NR2E1) functions primarily as a transcriptional repressor and its pivotal role in brain development, glioblastoma, mental retardation and retinopathologies make it an attractive drug target. TLX is expressed in the neural stem cells (NSCs) of the subventricular zone and the hippocampus subgranular zone, regions with persistent neurogenesis in the adult brain, and functions as an essential regulator of NSCs maintenance and self-renewal. Little is known about the TLX social network of interactors and only few TLX coregulators are described. To identify and characterize novel TLX-binders and possible coregulators, we performed yeast-two-hybrid (Y2H) screens of a human adult brain cDNA library using different TLX constructs as baits. Our screens identified multiple clones of Atrophin-1 (ATN1), a previously described TLX interactor. In addition, we identified an interaction with the oncoprotein and zinc finger transcription factor BCL11A (CTIP1/Evi9), a key player in the hematopoietic system and in major blood-related malignancies. This interaction was validated by expression and coimmunoprecipitation in human cells. BCL11A potentiated the transrepressive function of TLX in an in vitro reporter gene assay. Our work suggests that BCL11A is a novel TLX coregulator that might be involved in TLX-dependent gene regulation in the brain.
  • Everett, C., Blasi, D. E., & Roberts, S. G. (2015). Climate, vocal folds, and tonal languages: Connecting the physiological and geographic dots. Proceedings of the National Academy of Sciences of the United States of America, 112, 1322-1327. doi:10.1073/pnas.1417413112.

    Abstract

    We summarize a number of findings in laryngology demonstrating that perturbations of phonation, including increased jitter and shimmer, are associated with desiccated ambient air. We predict that, given the relative imprecision of vocal fold vibration in desiccated versus humid contexts, arid and cold ecologies should be less amenable, when contrasted to warm and humid ecologies, to the development of languages with phonemic tone, especially complex tone. This prediction is supported by data from two large independently coded databases representing 3,700+ languages. Languages with complex tonality have generally not developed in very cold or otherwise desiccated climates, in accordance with the physiologically based predictions. The predicted global geographic–linguistic association is shown to operate within continents, within major language families, and across language isolates. Our results offer evidence that human sound systems are influenced by environmental factors.
  • Fahrenfort, J. J., Snijders, T. M., Heinen, K., van Gaal, S., & Scholte, H. S. (2012). Neuronal integration in visual cortex elevates face category tuning to conscious face perception. Proceedings of the National Academy of Sciences of the United States of America, 109(52), 21504-21509. doi:10.1073/pnas.1207414110.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.

    Abstract

    The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition.
  • Fawcett, C., & Liszkowski, U. (2012). Infants anticipate others’ social preferences. Infant and Child Development, 21, 239-249. doi:10.1002/icd.739.

    Abstract

    In the current eye-tracking study, we explored whether 12-month-old infants can predict others' social preferences. We showed infants scenes in which two characters alternately helped or hindered an agent in his goal of climbing a hill. In a control condition, the two characters moved up and down the hill in identical ways to the helper and hinderer but did not make contact with the agent; thus, they did not cause him to reach or not reach his goal. Following six alternating familiarization trials of helping and hindering interactions (help-hinder condition) or up and down interactions (up-down condition), infants were shown one test trial in which they could visually anticipate the agent approaching one of the two characters. As predicted, infants in the help-hinder condition made significantly more visual anticipations toward the helping than the hindering character, suggesting that they predicted the agent would approach the helping character. In contrast, infants revealed no difference in visual anticipations between the up and down characters. The up-down condition served to control for low-level perceptual explanations of the results for the help-hinder condition. Thus, together the results reveal that 12-month-old infants make predictions about others' behaviour and social preferences from a third-party perspective.
  • Fawcett, C., & Liszkowski, U. (2012). Mimicry and play initiation in 18-month-old infants. Infant Behavior and Development, 35, 689-696. doi:10.1016/j.infbeh.2012.07.014.

    Abstract

    Across two experiments, we examined the relationship between 18-month-old infants’ mimicry and social behavior – particularly invitations to play with an adult play partner. In Experiment 1, we manipulated whether an adult mimicked the infant's play or not during an initial play phase. We found that infants who had been mimicked were subsequently more likely to invite the adult to join their play with a new toy. In addition, they reenacted marginally more steps from a social learning demonstration she gave. In Experiment 2, infants had the chance to spontaneously mimic the adult during the play phase. Complementing Experiment 1, those infants who spent more time mimicking the adult were more likely to invite her to play with a new toy. This effect was specific to play and not apparent in other communicative acts, such as directing the adult's attention to an event or requesting toys. Together, the results suggest that infants use mimicry as a tool to establish social connections with others and that mimicry has specific influences on social behaviors related to initiating subsequent joint interactions.
  • Fawcett, C., & Liszkowski, U. (2012). Observation and initiation of joint action in infants. Child Development, 83, 434-441. doi:10.1111/j.1467-8624.2011.01717.x.

    Abstract

    Infants imitate others’ individual actions, but do they also replicate others’ joint activities? To examine whether observing joint action influences infants’ initiation of joint action, forty-eight 18-month-old infants observed object demonstrations by 2 models acting together (joint action), 2 models acting individually (individual action), or 1 model acting alone (solitary action). Infants’ behavior was examined after they were given each object. Infants in the joint action condition attempted to initiate joint action more often than infants in the other conditions, yet they were equally likely to communicate for other reasons and to imitate the demonstrated object-directed actions. The findings suggest that infants learn to replicate others’ joint activity through observation, an important skill for cultural transmission of shared practices.