Publications

  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., Nava, M., & Declerck, T. (2004). INTERA - a Distributed Domain of Metadata Resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 369-372). Paris: European Language Resources Association.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broeder, D., Declerck, T., Hinrichs, E., Piperidis, S., Romary, L., Calzolari, N., & Wittenburg, P. (2008). Foundation of a component-based flexible registry for language resources and technology. In N. Calzolari (Ed.), Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008) (pp. 1433-1436). European Language Resources Association (ELRA).

    Abstract

    Within the CLARIN e-science infrastructure project it is foreseen to develop a component-based registry for metadata for Language Resources and Language Technology. With this registry it is hoped to overcome the problems of currently available systems with respect to inflexible fixed schemas, unsuitable terminology and interoperability. The registry will address interoperability needs by referring to a shared vocabulary registered in data category registries as suggested by ISO.
  • Broeder, D., Auer, E., Kemps-Snijders, M., Sloetjes, H., Wittenburg, P., & Zinn, C. (2008). Managing very large multimedia archives and their integration into federations. In P. Manghi, P. Pagano, & P. Zezula (Eds.), First Workshop on Very Large Digital Libraries (VLDL 2008).
  • Broersma, M., & Cutler, A. (2008). Phantom word activation in L2. System, 36(1), 22-34. doi:10.1016/j.system.2007.11.003.

    Abstract

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two studies illustrating this. First, in a lexical decision study, L2 listeners accepted (but L1 listeners did not accept) spoken non-words such as groof or flide as real English words. Second, a priming study demonstrated that the same spoken non-words made recognition of the real words groove, flight easier for L2 (but not L1) listeners, suggesting that, for the L2 listeners only, these real words had been activated by the spoken non-word input. We propose that further understanding of the activation and competition process in L2 lexical processing could lead to new understanding of L2 listening difficulty.
  • Broersma, M. (2008). Flexible cue use in nonnative phonetic categorization (L). Journal of the Acoustical Society of America, 124(2), 712-715. doi:10.1121/1.2940578.

    Abstract

    Native and nonnative listeners categorized final /v/ versus /f/ in English nonwords. Fricatives followed phonetically long originally /v/-preceding or short originally /f/-preceding vowels. Vowel duration was constant for each participant and sometimes mismatched other voicing cues. Previous results showed that not only English listeners but also Dutch listeners, whose L1 has no final voicing contrast, nevertheless used the misleading vowel duration for /v/-/f/ categorization. New analyses showed that Dutch listeners did use vowel duration initially, but quickly reduced its use, whereas the English listeners used it consistently throughout the experiment. Thus, nonnative listeners adapted to the stimuli more flexibly than native listeners did.
  • Broersma, M., & Kolkman, K. M. (2004). Lexical representation of non-native phonemes. In S. Kim, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1241-1244). Seoul: Sunjin Printing Co.
  • Brouwer, S. (2013). Continuous recognition memory for spoken words in noise. Proceedings of Meetings on Acoustics, 19: 060117. doi:10.1121/1.4798781.

    Abstract

    Previous research has shown that talker variability affects recognition memory for spoken words (Palmeri et al., 1993). This study examines whether additive noise is similarly retained in memory for spoken words. In a continuous recognition memory task, participants listened to a list of spoken words mixed with noise consisting of a pure tone or of high-pass filtered white noise. The noise and speech were in non-overlapping frequency bands. In Experiment 1, listeners indicated whether each spoken word in the list was OLD (heard before in the list) or NEW. Results showed that listeners were as accurate and as fast at recognizing a word as old if it was repeated with the same or different noise. In Experiment 2, listeners also indicated whether words judged as OLD were repeated with the same or with a different type of noise. Results showed that listeners benefitted from hearing words presented with the same versus different noise. These data suggest that spoken words and temporally-overlapping but spectrally non-overlapping noise are retained or reconstructed together for explicit, but not for implicit recognition memory. This indicates that the extent to which noise variability is retained seems to depend on the depth of processing.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.

    Abstract

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
  • Brown, P. (2008). Up, down, and across the land: Landscape terms and place names in Tzeltal. Language Sciences, 30(2/3), 151-181. doi:10.1016/j.langsci.2006.12.003.

    Abstract

    The Tzeltal language is spoken in a mountainous region of southern Mexico by some 280,000 Mayan corn farmers. This paper focuses on landscape and place vocabulary in the Tzeltal municipio of Tenejapa, where speakers use an absolute system of spatial reckoning based on the overall uphill (southward)/downhill (northward) slope of the land. The paper examines the formal and functional properties of the Tenejapa Tzeltal vocabulary labelling features of the local landscape and relates it to spatial vocabulary for describing locative relations, including the uphill/downhill axis for spatial reckoning as well as body part terms for specifying parts of locative grounds. I then examine the local place names, discuss their semantic and morphosyntactic properties, and relate them to the landscape vocabulary, to spatial vocabulary, and also to cultural narratives about events associated with particular places. I conclude with some observations on the determinants of landscape and place terminology in Tzeltal, and what this vocabulary and how it is used reveal about the conceptualization of landscape and places.
  • Brown, A., & Gullberg, M. (2008). Bidirectional crosslinguistic influence in L1-L2 encoding of manner in speech and gesture. Studies in Second Language Acquisition, 30(2), 225-251. doi:10.1017/S0272263108080327.

    Abstract

    Whereas most research in SLA assumes the relationship between the first language (L1) and the second language (L2) to be unidirectional, this study investigates the possibility of a bidirectional relationship. We examine the domain of manner of motion, in which monolingual Japanese and English speakers differ both in speech and gesture. Parallel influences of the L1 on the L2 and the L2 on the L1 were found in production from native Japanese speakers with intermediate knowledge of English. These effects, which were strongest in gesture patterns, demonstrate that (a) bidirectional interaction between languages in the multilingual mind can occur even with intermediate proficiency in the L2 and (b) gesture analyses can offer insights on interactions between languages beyond those observed through analyses of speech alone.
  • Brown, A. (2008). Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker. Gesture, 8(2), 256-276. doi:10.1075/gest.8.2.08bro.

    Abstract

    Abundant evidence across languages, structures, proficiencies, and modalities shows that properties of first languages influence performance in second languages. This paper presents an alternative perspective on the interaction between established and emerging languages within second language speakers by arguing that an L2 can influence an L1, even at relatively low proficiency levels. Analyses of the gesture viewpoint employed in English and Japanese descriptions of motion events revealed systematic between-language and within-language differences. Monolingual Japanese speakers used significantly more Character Viewpoint than monolingual English speakers, who predominantly employed Observer Viewpoint. In their L1 and their L2, however, native Japanese speakers with intermediate knowledge of English patterned more like the monolingual English speakers than their monolingual Japanese counterparts. After controlling for effects of cultural exposure, these results offer valuable insights into both the nature of cross-linguistic interactions within individuals and potential factors underlying gesture viewpoint.
  • Brown, A., & Gullberg, M. (2013). L1–L2 convergence in clausal packaging in Japanese and English. Bilingualism: Language and Cognition, 16, 477-494. doi:10.1017/S1366728912000491.

    Abstract

    This research received technical and financial support from Syracuse University, the Max Planck Institute for Psycholinguistics, and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56-384, The Dynamics of Multilingual Processing, awarded to Marianne Gullberg and Peter Indefrey).
  • Brown-Schmidt, S., & Konopka, A. E. (2008). Little houses and casas pequeñas: Message formulation and syntactic form in unscripted speech with speakers of English and Spanish. Cognition, 109(2), 274-280. doi:10.1016/j.cognition.2008.07.011.

    Abstract

    During unscripted speech, speakers coordinate the formulation of pre-linguistic messages with the linguistic processes that implement those messages into speech. We examine the process of constructing a contextually appropriate message and interfacing that message with utterance planning in English (the small butterfly) and Spanish (la mariposa pequeña) during an unscripted, interactive task. The coordination of gaze and speech during formulation of these messages is used to evaluate two hypotheses regarding the lower limit on the size of message planning units, namely whether messages are planned in units isomorphous to entire phrases or units isomorphous to single lexical items. Comparing the planning of fluent pre-nominal adjectives in English and post-nominal adjectives in Spanish showed that size information is added to the message later in Spanish than English, suggesting that speakers can prepare pre-linguistic messages in lexically-sized units. The results also suggest that the speaker can use disfluency to coordinate the transition from thought to speech.
  • Bruggeman, L., & Cutler, A. (2019). The dynamics of lexical activation and competition in bilinguals’ first versus second language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1342-1346). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Speech input causes listeners to activate multiple candidate words which then compete with one another. These include onset competitors, that share a beginning (bumper, butter), but also, counterintuitively, rhyme competitors, sharing an ending (bumper, jumper). In L1, competition is typically stronger for onset than for rhyme. In L2, onset competition has been attested but rhyme competition has heretofore remained largely unexamined. We assessed L1 (Dutch) and L2 (English) word recognition by the same late-bilingual individuals. In each language, eye gaze was recorded as listeners heard sentences and viewed sets of drawings: three unrelated, one depicting an onset or rhyme competitor of a word in the input. Activation patterns revealed substantial onset competition but no significant rhyme competition in either L1 or L2. Rhyme competition may thus be a “luxury” feature of maximally efficient listening, to be abandoned when resources are scarcer, as in listening by late bilinguals, in either language.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.
  • Brugman, H., Malaisé, V., & Hollink, L. (2008). A common multimedia annotation framework for cross linking cultural heritage digital collections. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    In the context of the CATCH research program, which is currently carried out at a number of large Dutch cultural heritage institutions, our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As a first step we designed an Annotation Meta Model (AMM): a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other’s annotation data as the primary data for further annotation. On the basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. Finally, we report on our experiences with the application of the model, API and repository when developing web applications for collection managers in cultural heritage institutions.
  • Brugman, H., Crasborn, O., & Russel, A. (2004). Collaborative annotation of sign language data with Peer-to-Peer technology. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 213-216). Paris: European Language Resources Association.
  • Brugman, H., & Russel, A. (2004). Annotating Multi-media/Multi-modal resources with ELAN. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) (pp. 2065-2068). Paris: European Language Resources Association.
  • Buetti, S., Tamietto, M., Hervais-Adelman, A., Kerzel, D., de Gelder, B., & Pegna, A. J. (2013). Dissociation between goal-directed and discrete response localization in a patient with bilateral cortical blindness. Journal of Cognitive Neuroscience, 25(10), 1769-1775. doi:10.1162/jocn_a_00404.

    Abstract

    We investigated localization performance of simple targets in patient TN, who suffered bilateral damage of his primary visual cortex and shows complete cortical blindness. Using a two-alternative forced-choice paradigm, TN was asked to guess the position of left-right targets with goal-directed and discrete manual responses. The results indicate a clear dissociation between goal-directed and discrete responses. TN pointed toward the correct target location in approximately 75% of the trials but was at chance level with discrete responses. This indicates that the residual ability to localize an unseen stimulus depends critically on the possibility to translate a visual signal into a goal-directed motor output at least in certain forms of blindsight.
  • Burenhult, N. (2008). Spatial coordinate systems in demonstrative meaning. Linguistic Typology, 12(1), 99-142. doi:10.1515/LITY.2008.032.

    Abstract

    Exploring the semantic encoding of a group of crosslinguistically uncommon “spatial-coordinate demonstratives”, this work establishes the existence of demonstratives whose function is to project angular search domains, thus invoking proper coordinate systems (or “frames of reference”). What is special about these distinctions is that they rely on a spatial asymmetry in relativizing a demonstrative referent (representing the Figure) to the deictic center (representing the Ground). A semantic typology of such demonstratives is constructed based on the nature of the asymmetries they employ. A major distinction is proposed between asymmetries outside the deictic Figure-Ground array (e.g., features of the larger environment) and those within it (e.g., facets of the speaker/addressee dyad). A unique system of the latter type, present in Jahai, an Aslian (Mon-Khmer) language spoken by groups of hunter-gatherers in the Malay Peninsula, is introduced and explored in detail using elicited data as well as natural conversational data captured on video. Although crosslinguistically unusual, spatial-coordinate demonstratives sit at the interface of issues central to current discourse in semantic-pragmatic theory: demonstrative function, deictic layout, and spatial frames of reference.
  • Burenhult, N. (2004). Spatial deixis in Jahai. In S. Burusphat (Ed.), Papers from the 11th Annual Meeting of the Southeast Asian Linguistics Society 2001 (pp. 87-100). Arizona State University: Program for Southeast Asian Studies.
  • Burenhult, N. (2008). Streams of words: Hydrological lexicon in Jahai. Language Sciences, 30(2/3), 182-199. doi:10.1016/j.langsci.2006.12.005.

    Abstract

    This article investigates hydrological lexicon in Jahai, a Mon-Khmer language of the Malay Peninsula. Setting out from an analysis of the structural and semantic properties as well as the indigenous vs. borrowed origin of lexicon related to drainage, it teases out a set of distinct lexical systems for reference to and description of hydrological features. These include (1) indigenous nominal labels subcategorised by metaphor, (2) borrowed nominal labels, (3) verbals referring to properties and processes of water, (4) a set of motion verbs, and (5) place names. The lexical systems, functionally diverse and driven by different factors, illustrate that principles and strategies of geographical categorisation can vary systematically and profoundly within a single language.
  • Burenhult, N. (2003). Attention, accessibility, and the addressee: The case of the Jahai demonstrative ton. Pragmatics, 13(3), 363-379.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Burenhult, N., & Levinson, S. C. (2008). Language and landscape: A cross-linguistic perspective. Language Sciences, 30(2/3), 135-150. doi:10.1016/j.langsci.2006.12.028.

    Abstract

    This special issue is the outcome of collaborative work on the relationship between language and landscape, carried out in the Language and Cognition Group at the Max Planck Institute for Psycholinguistics. The contributions explore the linguistic categories of landscape terms and place names in nine genetically, typologically and geographically diverse languages, drawing on data from first-hand fieldwork. The present introductory article lays out the reasons why the domain of landscape is of central interest to the language sciences and beyond, and it outlines some of the major patterns that emerge from the cross-linguistic comparison which the papers invite. The data point to considerable variation within and across languages in how systems of landscape terms and place names are ontologised. This has important implications for practical applications from international law to modern navigation systems.
  • Burenhult, N. (Ed.). (2008). Language and landscape: Geographical ontology in cross-linguistic perspective [Special Issue]. Language Sciences, 30(2/3).
  • Burkhardt, P., Avrutin, S., Piñango, M. M., & Ruigendijk, E. (2008). Slower-than-normal syntactic processing in agrammatic Broca's aphasia: Evidence from Dutch. Journal of Neurolinguistics, 21(2), 120-137. doi:10.1016/j.jneuroling.2006.10.004.

    Abstract

    Studies of agrammatic Broca's aphasia reveal a diverging pattern of performance in the comprehension of reflexive elements: offline, performance seems unimpaired, whereas online—and in contrast to both matching controls and Wernicke's patients—no antecedent reactivation is observed at the reflexive. Here we propose that this difference characterizes the agrammatic comprehension deficit as a result of slower-than-normal syntactic structure formation. To test this characterization, the comprehension of three Dutch agrammatic patients and matching control participants was investigated utilizing the cross-modal lexical decision (CMLD) interference task. Two types of reflexive-antecedent dependencies were tested, which have already been shown to exert distinct processing demands on the comprehension system as a function of the level at which the dependency was formed. Our hypothesis predicts that if the agrammatic system has a processing limitation such that syntactic structure is built in a protracted manner, this limitation will be reflected in delayed interpretation. Confirming previous findings, the Dutch patients show an effect of distinct processing demands for the two types of reflexive-antecedent dependencies but with a temporal delay. We argue that this delayed syntactic structure formation is the result of limited processing capacity that specifically affects the syntactic system.
  • Burkhardt, P. (2008). Two types of definites: Evidence for presupposition cost. In A. Grønn (Ed.), Proceedings of SuB 12 (pp. 66-80). Oslo: ILOS.

    Abstract

    This paper investigates the notion of definiteness from a psycholinguistic perspective and addresses Löbner’s (1987) distinction between semantic and pragmatic definites. To this end inherently definite noun phrases, proper names, and indexicals are investigated as instances of (relatively) rigid designators (i.e. semantic definites) and contrasted with definite noun phrases and third person pronouns that are contingent on context to unambiguously determine their reference (i.e. pragmatic definites). Electrophysiological data provide support for this distinction and further substantiate the claim that proper names differ from definite descriptions. These findings suggest that certain expressions carry a feature of inherent definiteness, which facilitates their discourse integration (i.e. semantic definites), while others rely on the establishment of a relation with prior information, which results in processing cost.
  • Burkhardt, P. (2008). What inferences can tell us about the given-new distinction. In Proceedings of the 18th International Congress of Linguists (pp. 219-220).
  • Burra, N., Hervais-Adelman, A., Kerzel, D., Tamietto, M., de Gelder, B., & Pegna, A. J. (2013). Amygdala Activation for Eye Contact Despite Complete Cortical Blindness. The Journal of Neuroscience, 33(25), 10483-10489. doi:10.1523/jneurosci.3994-12.2013.

    Abstract

    Cortical blindness refers to the loss of vision that occurs after destruction of the primary visual cortex. Although there is no sensory cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be processed in the absence of any functional visual cortex. A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm during which blocks of faces were presented either with their gaze directed toward or away from the viewer. Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.
  • Burra, N., Hervais-Adelman, A., Celeghin, A., de Gelder, B., & Pegna, A. J. (2019). Affective blindsight relies on low spatial frequencies. Neuropsychologia, 128, 44-49. doi:10.1016/j.neuropsychologia.2017.10.009.

    Abstract

    The human brain can process facial expressions of emotions rapidly and without awareness. Several studies in patients with damage to their primary visual cortices have shown that they may be able to guess the emotional expression on a face despite their cortical blindness. This non-conscious processing, called affective blindsight, may arise through an intact subcortical visual route that leads from the superior colliculus to the pulvinar, and thence to the amygdala. This pathway is thought to process the crude visual information conveyed by the low spatial frequencies of the stimuli.

    In order to investigate whether this is the case, we studied a patient (TN) with bilateral cortical blindness and affective blindsight. An fMRI paradigm was performed in which fearful and neutral expressions were presented using faces that were either unfiltered, or filtered to remove high or low spatial frequencies. Unfiltered fearful faces produced right amygdala activation although the patient was unaware of the presence of the stimuli. More importantly, the low spatial frequency components of fearful faces continued to produce right amygdala activity while the high spatial frequency components did not. Our findings thus confirm that the visual information present in the low spatial frequencies is sufficient to produce affective blindsight, further suggesting that its existence could rely on the subcortical colliculo-pulvino-amygdalar pathway.
  • Cai, Z. G., Connell, L., & Holler, J. (2013). Time does not flow without language: Spatial distance affects temporal duration regardless of movement or direction. Psychonomic Bulletin & Review, 20(5), 973-980. doi:10.3758/s13423-013-0414-3.

    Abstract

    Much evidence has suggested that people conceive of time as flowing directionally in transverse space (e.g., from left to right for English speakers). However, this phenomenon has never been tested in a fully nonlinguistic paradigm where neither stimuli nor task use linguistic labels, which raises the possibility that time is directional only when reading/writing direction has been evoked. In the present study, English-speaking participants viewed a video where an actor sang a note while gesturing and reproduced the duration of the sung note by pressing a button. Results showed that the perceived duration of the note was increased by a long-distance gesture, relative to a short-distance gesture. This effect was equally strong for gestures moving from left to right and from right to left and was not dependent on gestures depicting movement through space; a weaker version of the effect emerged with static gestures depicting spatial distance. Since both our gesture stimuli and temporal reproduction task were nonlinguistic, we conclude that the spatial representation of time is nondirectional: Movement contributes, but is not necessary, to the representation of temporal information in a transverse timeline.
  • Calandruccio, L., Brouwer, S., Van Engen, K. J., Dhar, S., & Bradlow, A. R. (2013). Masking release due to linguistic and phonetic dissimilarity between the target and masker speech. American Journal of Audiology, 22, 157-164. doi:10.1044/1059-0889(2013/12-0072).

    Abstract

    Purpose: To investigate masking release for speech maskers for linguistically and phonetically close (English and Dutch) and distant (English and Mandarin) language pairs. Method: Thirty-two monolingual speakers of English with normal audiometric thresholds participated in the study. Data are reported for an English sentence recognition task in English and for Dutch and Mandarin competing speech maskers (Experiment 1) and noise maskers (Experiment 2) that were matched either to the long-term average speech spectra or to the temporal modulations of the speech maskers from Experiment 1. Results: Listener performance increased as the target-to-masker linguistic distance increased (English-in-English < English-in-Dutch < English-in-Mandarin). Conclusion: Spectral differences between maskers can account for some, but not all, of the variation in performance between maskers; however, temporal differences did not seem to play a significant role.
  • Campisi, E., & Ozyurek, A. (2013). Iconicity as a communicative strategy: Recipient design in multimodal demonstrations for adults and children. Journal of Pragmatics, 47, 14-27. doi:10.1016/j.pragma.2012.12.007.

    Abstract

    Humans are the only species that uses communication to teach new knowledge to novices, usually to children (Tomasello, 1999 and Csibra and Gergely, 2006). This context of communication can employ “demonstrations” and it takes place with or without the help of objects (Clark, 1996). Previous research has focused on understanding the nature of demonstrations for very young children and with objects involved. However, little is known about the strategies used in demonstrating an action to an older child in comparison to another adult and without the use of objects, i.e., with gestures only. We tested if during demonstration of an action speakers use different degrees of iconicity in gestures for a child compared to an adult. 18 Italian subjects described to a camera how to make coffee imagining the listener as a 12-year-old child, a novice or an expert adult. While speech was found more informative both for the novice adult and for the child compared to the expert adult, the rate of iconic gestures increased and they were more informative and bigger only for the child compared to both of the adult conditions. Iconicity in gestures can be a powerful communicative strategy in teaching new knowledge to children in demonstrations and this is in line with claims that it can be used as a scaffolding device in grounding knowledge in experience (Perniss et al., 2010).
  • Cappuccio, M. L., Chu, M., & Kita, S. (2013). Pointing as an instrumental gesture: Gaze representation through indication. Humana.Mente: Journal of Philosophical Studies, 24, 125-149.

    Abstract

    We call those gestures “instrumental” that can enhance certain thinking processes of an agent by offering him representational models of his actions in a virtual space of imaginary performative possibilities. We argue that pointing is an instrumental gesture in that it represents geometrical information on one’s own gaze direction (i.e., a spatial model for attentional/ocular fixation/orientation), and provides a ritualized template for initiating gaze coordination and joint attention. We counter two possible objections, asserting respectively that the representational content of pointing is not constitutive, but derived from language, and that pointing directly solicits gaze coordination, without representing it. We consider two studies suggesting that attention and spatial perception are actively modified by one’s own pointing activity: the first study shows that pointing gestures help children link sets of objects to their corresponding number words; the second, that adults are faster and more accurate in counting when they point.
  • Capredon, M., Brucato, N., Tonasso, L., Choesmel-Cadamuro, V., Ricaut, F.-X., Razafindrazaka, H., Ratolojanahary, M. A., Randriamarolaza, L.-P., Champion, B., & Dugoujon, J.-M. (2013). Tracing Arab-Islamic Inheritance in Madagascar: Study of the Y-chromosome and Mitochondrial DNA in the Antemoro. PLoS One, 8(11): e80932. doi:10.1371/journal.pone.0080932.

    Abstract

    Madagascar is located at the crossroads of the Asian and African worlds and is therefore of particular interest for studies on human population migration. Within the large human diversity of the Great Island, we focused our study on a particular ethnic group, the Antemoro. Their culture presents an important Arab-Islamic influence, but the question of an Arab biological inheritance remains unresolved. We analyzed paternal (n=129) and maternal (n=135) lineages of this ethnic group. Although the majority of Antemoro genetic ancestry comes from sub-Saharan African and Southeast Asian gene pools, we observed in their paternal lineages two specific haplogroups (J1 and T1) linked to Middle Eastern origins. This inheritance was restricted to some Antemoro sub-groups. Statistical analyses tended to confirm significant Middle Eastern genetic contribution. This study gives a new perspective to the large human genetic diversity in Madagascar.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., & Sirigu, A. (2008). Neural Bases of Sequence Processing in Action and Language. Language Learning, 58(1), 179-199. doi:10.1111/j.1467-9922.2008.00470.x.

    Abstract

    Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for planning contiguous discourse segments and syntactic patterns in language. The neural encoding of sequential event knowledge and its domain dependency is a central issue in cognitive neuroscience. Converging evidence shows the involvement of a dedicated neural substrate, including the prefrontal cortex and Broca's area, in the representation and the processing of sequential event structure. After reviewing major representational models of sequential mechanisms in action and language, we discuss relevant neuropsychological and neuroimaging findings on the temporal organization of sequencing and sequence processing in both domains, suggesting that sequential event knowledge may be modularly organized through prefrontal and frontal subregions.
  • Carrion Castillo, A., Van der Haegen, L., Tzourio-Mazoyer, N., Kavaklioglu, T., Badillo, S., Chavent, M., Saracco, J., Brysbaert, M., Fisher, S. E., Mazoyer, B., & Francks, C. (2019). Genome sequencing for rightward hemispheric language dominance. Genes, Brain and Behavior, 18(5): e12572. doi:10.1111/gbb.12572.

    Abstract

    Most people have left‐hemisphere dominance for various aspects of language processing, but only roughly 1% of the adult population has atypically reversed, rightward hemispheric language dominance (RHLD). The genetic‐developmental program that underlies leftward language laterality is unknown, as are the causes of atypical variation. We performed an exploratory whole‐genome‐sequencing study, with the hypothesis that strongly penetrant, rare genetic mutations might sometimes be involved in RHLD. This was by analogy with situs inversus of the visceral organs (left‐right mirror reversal of the heart, lungs and so on), which is sometimes due to monogenic mutations. The genomes of 33 subjects with RHLD were sequenced and analyzed with reference to large population‐genetic data sets, as well as 34 subjects (14 left‐handed) with typical language laterality. The sample was powered to detect rare, highly penetrant, monogenic effects if they would be present in at least 10 of the 33 RHLD cases and no controls, but no individual genes had mutations in more than five RHLD cases while being un‐mutated in controls. A hypothesis derived from invertebrate mechanisms of left‐right axis formation led to the detection of an increased mutation load, in RHLD subjects, within genes involved with the actin cytoskeleton. The latter finding offers a first, tentative insight into molecular genetic influences on hemispheric language dominance.
  • Carrion Castillo, A., Franke, B., & Fisher, S. E. (2013). Molecular genetics of dyslexia: An overview. Dyslexia, 19(4), 214-240. doi:10.1002/dys.1464.

    Abstract

    Dyslexia is a highly heritable learning disorder with a complex underlying genetic architecture. Over the past decade, researchers have pinpointed a number of candidate genes that may contribute to dyslexia susceptibility. Here, we provide an overview of the state of the art, describing how studies have moved from mapping potential risk loci, through identification of associated gene variants, to characterization of gene function in cellular and animal model systems. Work thus far has highlighted some intriguing mechanistic pathways, such as neuronal migration, axon guidance, and ciliary biology, but it is clear that we still have much to learn about the molecular networks that are involved. We end the review by highlighting the past, present, and future contributions of the Dutch Dyslexia Programme to studies of genetic factors. In particular, we emphasize the importance of relating genetic information to intermediate neurobiological measures, as well as the value of incorporating longitudinal and developmental data into molecular designs.
  • Casasanto, D. (2008). Similarity and proximity: When does close in space mean close in mind? Memory & Cognition, 36(6), 1047-1056. doi:10.3758/MC.36.6.1047.

    Abstract

    People often describe things that are similar as close and things that are dissimilar as far apart. Does the way people talk about similarity reveal something fundamental about the way they conceptualize it? Three experiments tested the relationship between similarity and spatial proximity that is encoded in metaphors in language. Similarity ratings for pairs of words or pictures varied as a function of how far apart the stimuli appeared on the computer screen, but the influence of distance on similarity differed depending on the type of judgments the participants made. Stimuli presented closer together were rated more similar during conceptual judgments of abstract entities or unseen object properties but were rated less similar during perceptual judgments of visual appearance. These contrasting results underscore the importance of testing predictions based on linguistic metaphors experimentally and suggest that our sense of similarity arises from our ability to combine available perceptual information with stored knowledge of experiential regularities.
  • Casasanto, D. (2008). Who's afraid of the big bad Whorf? Crosslinguistic differences in temporal language and thought. Language Learning, 58(suppl. 1), 63-79. doi:10.1111/j.1467-9922.2008.00462.x.

    Abstract

    The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
  • Casasanto, D., & Boroditsky, L. (2008). Time in the mind: Using space to think about time. Cognition, 106, 579-593. doi:10.1016/j.cognition.2007.03.004.

    Abstract

    How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people’s more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.
  • Casillas, M., & Cristia, A. (2019). A step-by-step guide to collecting and analyzing long-format speech environment (LFSE) recordings. Collabra, 5(1): 24. doi:10.1525/collabra.209.

    Abstract

    Recent years have seen rapid technological development of devices that can record communicative behavior as participants go about daily life. This paper is intended as an end-to-end methodological guidebook for potential users of these technologies, including researchers who want to study children’s or adults’ communicative behavior in everyday contexts. We explain how long-format speech environment (LFSE) recordings provide a unique view on language use and how they can be used to complement other measures at the individual and group level. We aim to help potential users of these technologies make informed decisions regarding research design, hardware, software, and archiving. We also provide information regarding ethics and implementation, issues that are difficult to navigate for those new to this technology, and on which little or no resources are available. This guidebook offers a concise summary of information for new users and points to sources of more detailed information for more advanced users. Links to discussion groups and community-augmented databases are also provided to help readers stay up-to-date on the latest developments.
  • Casillas, M., Rafiee, A., & Majid, A. (2019). Iranian herbalists, but not cooks, are better at naming odors than laypeople. Cognitive Science, 43(6): e12763. doi:10.1111/cogs.12763.

    Abstract

    Odor naming is enhanced in communities where communication about odors is a central part of daily life (e.g., wine experts, flavorists, and some hunter‐gatherer groups). In this study, we investigated how expert knowledge and daily experience affect the ability to name odors in a group of experts that has not previously been investigated in this context—Iranian herbalists; also called attars—as well as cooks and laypeople. We assessed naming accuracy and consistency for 16 herb and spice odors, collected judgments of odor perception, and evaluated participants' odor meta‐awareness. Participants' responses were overall more consistent and accurate for more frequent and familiar odors. Moreover, attars were more accurate than both cooks and laypeople at naming odors, although cooks did not perform significantly better than laypeople. Attars' perceptual ratings of odors and their overall odor meta‐awareness suggest they are also more attuned to odors than the other two groups. To conclude, Iranian attars—but not cooks—are better odor namers than laypeople. They also have greater meta‐awareness and differential perceptual responses to odors. These findings further highlight the critical role that expertise and type of experience have on olfactory functions.
  • Casillas, M., & Frank, M. C. (2013). The development of predictive processes in children’s discourse understanding. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (pp. 299-304). Austin, TX: Cognitive Science Society.

    Abstract

    We investigate children’s online predictive processing as it occurs naturally, in conversation. We showed 1–7 year-olds short videos of improvised conversation between puppets, controlling for available linguistic information through phonetic manipulation. Even one- and two-year-old children made accurate and spontaneous predictions about when a turn-switch would occur: they gazed at the upcoming speaker before they heard a response begin. This predictive skill relies on both lexical and prosodic information together, and is not tied to either type of information alone. We suggest that children integrate prosodic, lexical, and visual information to effectively predict upcoming linguistic material in conversation.
  • Castells-Nobau, A., Eidhof, I., Fenckova, M., Brenman-Suttner, D. B., Scheffer-de Gooyert, J. M., Christine, S., Schellevis, R. L., Van der Laan, K., Quentin, C., Van Ninhuijs, L., Hofmann, F., Ejsmont, R., Fisher, S. E., Kramer, J. M., Sigrist, S. J., Simon, A. F., & Schenck, A. (2019). Conserved regulation of neurodevelopmental processes and behavior by FoxP in Drosophila. PLoS One, 14(2): e0211652. doi:10.1371/journal.pone.0211652.

    Abstract

    FOXP proteins form a subfamily of evolutionarily conserved transcription factors involved in the development and functioning of several tissues, including the central nervous system. In humans, mutations in FOXP1 and FOXP2 have been implicated in cognitive deficits including intellectual disability and speech disorders. Drosophila exhibits a single ortholog, called FoxP, but due to a lack of characterized mutants, our understanding of the gene remains poor. Here we show that the dimerization property required for mammalian FOXP function is conserved in Drosophila. In flies, FoxP is enriched in the adult brain, showing strong expression in ~1000 neurons of cholinergic, glutamatergic and GABAergic nature. We generate Drosophila loss-of-function mutants and UAS-FoxP transgenic lines for ectopic expression, and use them to characterize FoxP function in the nervous system. At the cellular level, we demonstrate that Drosophila FoxP is required in larvae for synaptic morphogenesis at axonal terminals of the neuromuscular junction and for dendrite development of dorsal multidendritic sensory neurons. In the developing brain, we find that FoxP plays important roles in α-lobe mushroom body formation. Finally, at a behavioral level, we show that Drosophila FoxP is important for locomotion, habituation learning and social space behavior of adult flies. Our work shows that Drosophila FoxP is important for regulating several neurodevelopmental processes and behaviors that are related to human disease or vertebrate disease model phenotypes. This suggests a high degree of functional conservation with vertebrate FOXP orthologues and established flies as a model system for understanding FOXP related pathologies.
  • Cathomas, F., Azzinnari, D., Bergamini, G., Sigrist, H., Buerge, M., Hoop, V., Wicki, B., Goetze, L., Soares, S. M. P., Kukelova, D., Seifritz, E., Goebbels, S., Nave, K.-A., Ghandour, M. S., Seoighe, C., Hildebrandt, T., Leparc, G., Klein, H., Stupka, E., Hengerer, B., & Pryce, C. R. (2019). Oligodendrocyte gene expression is reduced by and influences effects of chronic social stress in mice. Genes, Brain and Behavior, 18(1): e12475. doi:10.1111/gbb.12475.

    Abstract

    Oligodendrocyte gene expression is downregulated in stress-related neuropsychiatric disorders, including depression. In mice, chronic social stress (CSS) leads to depression-relevant changes in brain and emotional behavior, and the present study shows the involvement of oligodendrocytes in this model. In C57BL/6 (BL/6) mice, RNA-sequencing (RNA-Seq) was conducted with prefrontal cortex, amygdala and hippocampus from CSS and controls; a gene enrichment database for neurons, astrocytes and oligodendrocytes was used to identify cell origin of deregulated genes, and cell deconvolution was applied. To assess the potential causal contribution of reduced oligodendrocyte gene expression to CSS effects, mice heterozygous for the oligodendrocyte gene cyclic nucleotide phosphodiesterase (Cnp1) on a BL/6 background were studied; a 2 genotype (wildtype, Cnp1+/−) × 2 environment (control, CSS) design was used to investigate effects on emotional behavior and amygdala microglia. In BL/6 mice, in prefrontal cortex and amygdala tissue comprising gray and white matter, CSS downregulated expression of multiple oligodendrocyte genes encoding myelin and myelin-axon-integrity proteins, and cell deconvolution identified a lower proportion of oligodendrocytes in amygdala. Quantification of oligodendrocyte proteins in amygdala gray matter did not yield evidence for reduced translation, suggesting that CSS impacts primarily on white matter oligodendrocytes or the myelin transcriptome. In Cnp1 mice, social interaction was reduced by CSS in Cnp1+/− mice specifically; using ionized calcium-binding adaptor molecule 1 (IBA1) expression, microglia activity was increased additively by Cnp1+/− and CSS in amygdala gray and white matter. This study provides back-translational evidence that oligodendrocyte changes are relevant to the pathophysiology and potentially the treatment of stress-related neuropsychiatric disorders.
  • Cattani, A., Floccia, C., Kidd, E., Pettenati, P., Onofrio, D., & Volterra, V. (2019). Gestures and words in naming: Evidence from crosslinguistic and crosscultural comparison. Language Learning, 69(3), 709-746. doi:10.1111/lang.12346.

    Abstract

    We report on an analysis of spontaneous gesture production in 2‐year‐old children who come from three countries (Italy, United Kingdom, Australia) and who speak two languages (Italian, English), in an attempt to tease apart the influence of language and culture when comparing children from different cultural and linguistic environments. Eighty‐seven monolingual children aged 24–30 months completed an experimental task measuring their comprehension and production of nouns and predicates. The Italian children scored significantly higher than the other groups on all lexical measures. With regard to gestures, British children produced significantly fewer pointing and speech combinations compared to Italian and Australian children, who did not differ from each other. In contrast, Italian children produced significantly more representational gestures than the other two groups. We conclude that spoken language development is primarily influenced by the input language over gesture production, whereas the combination of cultural and language environments affects gesture production.
  • Chang, Y.-N., Monaghan, P., & Welbourne, S. (2019). A computational model of reading across development: Effects of literacy onset on language processing. Journal of Memory and Language, 108: 104025. doi:10.1016/j.jml.2019.05.003.

    Abstract

    Cognitive development is shaped by interactions between cognitive architecture and environmental experiences of the growing brain. We examined the extent to which this interaction during development could be observed in language processing. We focused on age of acquisition (AoA) effects in reading, where early-learned words tend to be processed more quickly and accurately relative to later-learned words. We implemented a computational model including representations of print, sound and meaning of words, with training based on children’s gradual exposure to language. The model produced AoA effects in reading and lexical decision, replicating the larger effects of AoA when semantic representations are involved. Further, the model predicted that AoA would relate to differing use of the reading system, with words acquired before versus after literacy onset showing distinctive accessing of meaning and sound representations. An analysis of behaviour from the English Lexicon Project was consistent with the predictions: words acquired before literacy are more likely to access meaning via sound, showing a suppressed AoA effect, whereas words acquired after literacy rely more on direct print-to-meaning mappings, showing an exaggerated AoA effect. The reading system reveals vestigial traces of acquisition reflected in differing use of word representations during reading.
  • Chang, Y.-N., & Monaghan, P. (2019). Quantity and diversity of preliteracy language exposure both affect literacy development: Evidence from a computational model of reading. Scientific Studies of Reading, 23(3), 235-253. doi:10.1080/10888438.2018.1529177.

    Abstract

    Diversity of vocabulary knowledge and quantity of language exposure prior to literacy are key predictors of reading development. However, diversity and quantity of exposure are difficult to distinguish in behavioural studies, and so the causal relations with literacy are not well known. We tested these relations by training a connectionist triangle model of reading that learned to map between semantic; phonological; and, later, orthographic forms of words. The model first learned to map between phonology and semantics, where we manipulated the quantity and diversity of this preliterate language experience. Then the model learned to read. Both diversity and quantity of exposure had unique effects on reading performance, with larger effects for written word comprehension than for reading fluency. The results further showed that quantity of preliteracy language exposure was beneficial only when this was to a varied vocabulary and could be an impediment when exposed to a limited vocabulary.
  • Chang, F., Kidd, E., & Rowland, C. F. (2013). Prediction in processing is a by-product of language learning [Commentary on Pickering & Garrod: An integrated theory of language production and comprehension]. Behavioral and Brain Sciences, 36(4), 350-351. doi:10.1017/S0140525X12001495.

    Abstract

    Both children and adults predict the content of upcoming language, suggesting that prediction is useful for learning as well as processing. We present an alternative model which can explain prediction behaviour as a by-product of language learning. We suggest that a consideration of language acquisition places important constraints on Pickering & Garrod's (P&G's) theory.
  • Chen, X. S., White, W. T. J., Collins, L. J., & Penny, D. (2008). Computational identification of four spliceosomal snRNAs from the deep-branch eukaryote Giardia intestinalis. PLoS One, 3(8), e3106. doi:10.1371/journal.pone.0003106.

    Abstract

    RNAs processing other RNAs is very general in eukaryotes, but it is not clear to what extent it is ancestral to eukaryotes. Here we focus on pre-mRNA splicing, one of the most important RNA-processing mechanisms in eukaryotes. In most eukaryotes splicing is predominantly catalysed by the major spliceosome complex, which consists of five uridine-rich small nuclear RNAs (U-snRNAs) and over 200 proteins in humans. Three major spliceosomal introns have been found experimentally in Giardia; one Giardia U-snRNA (U5) and a number of spliceosomal proteins have also been identified. However, because of the low sequence similarity between the Giardia ncRNAs and those of other eukaryotes, the other U-snRNAs of Giardia had not been found. Using two computational methods, candidates for Giardia U1, U2, U4 and U6 snRNAs were identified in this study and shown by RT-PCR to be expressed. We found that identifying a U2 candidate helped identify U6 and U4 based on interactions between them. Secondary structural modelling of the Giardia U-snRNA candidates revealed typical features of eukaryotic U-snRNAs. We demonstrate a successful approach to combine computational and experimental methods to identify expected ncRNAs in a highly divergent protist genome. Our findings reinforce the conclusion that spliceosomal small-nuclear RNAs existed in the last common ancestor of eukaryotes.
  • Chen, A., & Mennen, I. (2008). Encoding interrogativity intonationally in a second language. In P. Barbosa, S. Madureira, & C. Reis (Eds.), Proceedings of the 4th International Conference on Speech Prosody (pp. 513-516). Campinas: Editora RG/CNPq.

    Abstract

    This study investigated how untutored learners encode interrogativity intonationally in a second language. Questions produced in free conversation were selected from longitudinal data of four untutored Italian learners of English. The questions were mostly wh-questions (WQs) and declarative questions (DQs). We examined the use of three cross-linguistically attested question cues: final rise, high peak and late peak. It was found that across learners the final rise occurred more frequently in DQs than in WQs. This is in line with the Functional Hypothesis whereby less syntactically-marked questions are more intonationally marked. However, the use of peak height and alignment is less consistent. The peak of the nuclear pitch accent was not necessarily higher and later in DQs than in WQs. The difference in learners’ exploitation of these cues can be explained by the relative importance of a question cue in the target language.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, A. (2003). Language dependence in continuation intonation. In M. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1069-1072). Adelaide: Causal Productions.
  • Chen, A. (2003). Reaction time as an indicator of discrete intonational contrasts in English. In Proceedings of Eurospeech 2003 (pp. 97-100).

    Abstract

    This paper reports a perceptual study using a semantically motivated identification task in which we investigated the nature of two pairs of intonational contrasts in English: (1) normal High accent vs. emphatic High accent; (2) early peak alignment vs. late peak alignment. Unlike previous inquiries, the present study employs an on-line method using the Reaction Time measurement, in addition to the measurement of response frequencies. Regarding the peak height continuum, the mean RTs are shortest for within-category identification but longest for across-category identification. As for the peak alignment contrast, no identification boundary emerges and the mean RTs only reflect a difference between peaks aligned with the vowel onset and peaks aligned elsewhere. We conclude that the peak height contrast is discrete but the previously claimed discreteness of the peak alignment contrast is not borne out.
  • Cho, T., & McQueen, J. M. (2004). Phonotactics vs. phonetic cues in native and non-native listening: Dutch and Korean listeners' perception of Dutch and English. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1301-1304). Seoul: Sunjin Printing Co.

    Abstract

    We investigated how listeners of two unrelated languages, Dutch and Korean, process phonotactically legitimate and illegitimate sounds spoken in Dutch and American English. To Dutch listeners, unreleased word-final stops are phonotactically illegal because word-final stops in Dutch are generally released in isolation, but to Korean listeners, released final stops are illegal because word-final stops are never released in Korean. Two phoneme monitoring experiments showed a phonotactic effect: Dutch listeners detected released stops more rapidly than unreleased stops whereas the reverse was true for Korean listeners. Korean listeners with English stimuli detected released stops more accurately than unreleased stops, however, suggesting that acoustic-phonetic cues associated with released stops improve detection accuracy. We propose that in non-native speech perception, phonotactic legitimacy in the native language speeds up phoneme recognition, the richness of acoustic-phonetic cues improves listening accuracy, and familiarity with the non-native language modulates the relative influence of these two factors.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & McQueen, J. M. (2008). Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics, 36, 239-249. doi:10.1016/j.wocn.2007.06.001.

    Abstract

    This study tests whether potential differences in the perceptual robustness of speech sounds influence continuous-speech processes. Two phoneme-monitoring experiments examined place assimilation in Korean. In Experiment 1, Koreans monitored for targets which were either labials (/p,m/) or alveolars (/t,n/), and which were either unassimilated or assimilated to a following /k/ in two-word utterances. Listeners detected unaltered (unassimilated) labials faster and more accurately than assimilated labials; there was no such advantage for unaltered alveolars. In Experiment 2, labial–velar differences were tested using conditions in which /k/ and /p/ were illegally assimilated to a following /t/. Unassimilated sounds were detected faster than illegally assimilated sounds, but this difference tended to be larger for /k/ than for /p/. These place-dependent asymmetries suggest that differences in the perceptual robustness of segments play a role in shaping phonological patterns.
  • Cho, T., & Johnson, E. K. (2004). Acoustic correlates of phrase-internal lexical boundaries in Dutch. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 1297-1300). Seoul: Sunjin Printing Co.

    Abstract

    The aim of this study was to determine if Dutch speakers reliably signal phrase-internal lexical boundaries, and if so, how. Six speakers recorded 4 pairs of phonemically identical strong-weak-strong (SWS) strings with matching syllable boundaries but mismatching intended word boundaries (e.g. reis # pastei versus reispas # tij, or more broadly C1V1(C)#C2V2(C)C3V3(C) vs. C1V1(C)C2V2(C)#C3V3(C)). An Analysis of Variance revealed 3 acoustic parameters that were significantly greater in S#WS items (C2 DURATION, RIME1 DURATION, C3 BURST AMPLITUDE) and 5 parameters that were significantly greater in the SW#S items (C2 VOT, C3 DURATION, RIME2 DURATION, RIME3 DURATION, and V2 AMPLITUDE). Additionally, center of gravity measurements suggested that the [s] to [t] coarticulation was greater in reis # pa[st]ei versus reispa[s] # [t]ij. Finally, a Logistic Regression Analysis revealed that the 3 parameters (RIME1 DURATION, RIME2 DURATION, and C3 DURATION) contributed most reliably to a S#WS versus SW#S classification.
  • Cho, T. (2003). Lexical stress, phrasal accent and prosodic boundaries in the realization of domain-initial stops in Dutch. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2657-2660). Adelaide: Causal Productions.

    Abstract

    This study examines the effects of prosodic boundaries, lexical stress, and phrasal accent on the acoustic realization of stops (/t, d/) in Dutch, with special attention paid to language-specificity in the phonetics-prosody interface. The results obtained from various acoustic measures show systematic phonetic variations in the production of /t d/ as a function of prosodic position, which may be interpreted as being due to prosodically conditioned articulatory strengthening. Shorter VOTs were found for the voiceless stop /t/ in prosodically stronger locations (as opposed to longer VOTs in this position in English). The results suggest that prosodically-driven phonetic realization is bounded by a language-specific phonological feature system.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Christoffels, I. K., Ganushchak, L. Y., & Koester, D. (2013). Language conflict in translation: An ERP study of translation production. Journal of Cognitive Psychology, 25, 646-664. doi:10.1080/20445911.2013.821127.

    Abstract

    Although most bilinguals can translate with relative ease, the underlying neuro-cognitive processes are poorly understood. Using event-related brain potentials (ERPs) we investigated the temporal course of word translation. Participants translated words from and to their first (L1, Dutch) and second (L2, English) language while ERPs were recorded. Interlingual homographs (IHs) were included to introduce language conflict. IHs share orthographic form but have different meanings in L1 and L2 (e.g., room in Dutch refers to cream). Results showed that the brain distinguished between translation directions as early as 200 ms after word presentation: the P2 amplitudes were more positive in the L1→L2 translation direction. The N400 was also modulated by translation direction, with more negative amplitudes in the L2→L1 translation direction. Furthermore, the IHs were translated more slowly, induced more errors, and elicited more negative N400 amplitudes than control words. In a naming experiment, participants read aloud the same words in L1 or L2 while ERPs were recorded. Results showed no effect of either IHs or language, suggesting that task schemas may be crucially related to language control in translation. Furthermore, translation appears to involve conceptual processing in both translation directions, and the task goal appears to influence how words are processed.

  • Chu, M., & Kita, S. (2008). Spontaneous gestures during mental rotation tasks: Insights into the microdevelopment of the motor strategy. Journal of Experimental Psychology: General, 137, 706-723. doi:10.1037/a0013157.

    Abstract

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand–object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation. Hand–object interaction gestures were produced earlier than object-movement gestures, the rate of both types of gestures decreased, and gestures became more distant from the stimulus object over trials (Experiments 1 and 3). Furthermore, in the first few trials, object-movement gestures increased, whereas hand–object interaction gestures decreased, and this change of motor strategies was also reflected in the type of verbal description of rotation in the concurrent speech (Experiment 2). This change of motor strategies was hampered when gestures were prohibited (Experiment 4). The authors concluded that the motor strategy becomes less dependent on agentive action on the object, and also becomes internalized over the course of the experiment, and that gesture facilitates the former process. When solving a problem regarding the physical world, adults go through developmental processes similar to internalization and symbolic distancing in young children, albeit within a much shorter time span.
  • Clahsen, H., Sonnenstuhl, I., Hadler, M., & Eisenbeiss, S. (2008). Morphological paradigms in language processing and language disorders. Transactions of the Philological Society, 99(2), 247-277. doi:10.1111/1467-968X.00082.

    Abstract

    We present results from two cross‐modal morphological priming experiments investigating regular person and number inflection on finite verbs in German. We found asymmetries in the priming patterns between different affixes that can be predicted from the structure of the paradigm. We also report data from language disorders which indicate that inflectional errors produced by language‐impaired adults and children tend to occur within a given paradigm dimension, rather than randomly across the paradigm. We conclude that morphological paradigms are used by the human language processor and can be systematically affected in language disorders.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Cohen, E., & Haun, D. B. M. (2013). The development of tag-based cooperation via a socially acquired trait. Evolution and Human Behavior, 34, 230-235. doi:10.1016/j.evolhumbehav.2013.02.001.

    Abstract

    Recent theoretical models have demonstrated that phenotypic traits can support the non-random assortment of cooperators in a population, thereby permitting the evolution of cooperation. In these “tag-based models”, cooperators modulate cooperation according to an observable and hard-to-fake trait displayed by potential interaction partners. Socially acquired vocalizations in general, and speech accent among humans in particular, are frequently proposed as hard to fake and hard to hide traits that display sufficient cross-populational variability to reliably guide such social assortment in fission–fusion societies. Adults’ sensitivity to accent variation in social evaluation and decisions about cooperation is well-established in sociolinguistic research. The evolutionary and developmental origins of these biases are largely unknown, however. Here, we investigate the influence of speech accent on 5–10-year-old children's developing social and cooperative preferences across four Brazilian Amazonian towns. Two sites have a single dominant accent, and two sites have multiple co-existing accent varieties. We found that children's friendship and resource allocation preferences were guided by accent only in sites characterized by accent heterogeneity. Results further suggest that this may be due to a more sensitively tuned ear for accent variation. The demonstrated local-accent preference did not hold in the face of personal cost. Results suggest that mechanisms guiding tag-based assortment are likely tuned according to locally relevant tag-variation.

    Additional information

    Cohen_Suppl_Mat_2013.docx
  • Comasco, E., Schijven, D., de Maeyer, H., Vrettou, M., Nylander, I., Sundström-Poromaa, I., & Olivier, J. D. A. (2019). Constitutive serotonin transporter reduction resembles maternal separation with regard to stress-related gene expression. ACS Chemical Neuroscience, 10, 3132-3142. doi:10.1021/acschemneuro.8b00595.

    Abstract

    Interactive effects between allelic variants of the serotonin transporter (5-HTT) promoter-linked polymorphic region (5-HTTLPR) and stressors on depression symptoms have been documented, as well as questioned, by meta-analyses. Translational models of constitutive 5-htt reduction and experimentally controlled stressors often led to inconsistent behavioral and molecular findings and often did not include females. The present study sought to investigate the effect of 5-htt genotype, maternal separation, and sex on the expression of stress-related candidate genes in the rat hippocampus and frontal cortex. The mRNA expression levels of Avp, Pomc, Crh, Crhbp, Crhr1, Bdnf, Ntrk2, Maoa, Maob, and Comt were assessed in the hippocampus and frontal cortex of 5-htt +/− and 5-htt +/+ male and female adult rats exposed, or not, to daily maternal separation for 180 min during the first 2 postnatal weeks. Gene- and brain region-dependent, but sex-independent, interactions between 5-htt genotype and maternal separation were found. Gene expression levels were higher in 5-htt +/+ rats not exposed to maternal separation compared with the other experimental groups. Maternal separation and 5-htt +/− genotype did not yield additive effects on gene expression. Correlative relationships, mainly positive, were observed within, but not across, brain regions in all groups except in non-maternally separated 5-htt +/+ rats. Gene expression patterns in the hippocampus and frontal cortex of rats exposed to maternal separation resembled the ones observed in rats with reduced 5-htt expression regardless of sex. These results suggest that floor effects of 5-htt reduction and maternal separation might explain inconsistent findings in humans and rodents.
  • Connell, L., Cai, Z. G., & Holler, J. (2013). Do you see what I'm singing? Visuospatial movement biases pitch perception. Brain and Cognition, 81, 124-130. doi:10.1016/j.bandc.2012.09.005.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Cook, A. E., & Meyer, A. S. (2008). Capacity demands of phoneme selection in word production: New evidence from dual-task experiments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 886-899. doi:10.1037/0278-7393.34.4.886.

    Abstract

    Three dual-task experiments investigated the capacity demands of phoneme selection in picture naming. On each trial, participants named a target picture (Task 1) and carried out a tone discrimination task (Task 2). To vary the time required for phoneme selection, the authors combined the targets with phonologically related or unrelated distractor pictures (Experiment 1) or words, which were clearly visible (Experiment 2) or masked (Experiment 3). When pictures or masked words were presented, the tone discrimination and picture naming latencies were shorter in the related condition than in the unrelated condition, which indicates that phoneme selection requires central processing capacity. However, when the distractor words were clearly visible, the facilitatory effect was confined to the picture naming latencies. This pattern arose because the visible related distractor words facilitated phoneme selection but slowed down speech monitoring processes that had to be completed before the response to the tone could be selected.
  • Cooke, M., & Scharenborg, O. (2008). The Interspeech 2008 consonant challenge. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1765-1768). ISCA Archive.

    Abstract

    Listeners outperform automatic speech recognition systems at every level, including the very basic level of consonant identification. What is not clear is where the human advantage originates. Does the fault lie in the acoustic representations of speech or in the recognizer architecture, or in a lack of compatibility between the two? Many insights can be gained by carrying out a detailed human-machine comparison. The purpose of the Interspeech 2008 Consonant Challenge is to promote focused comparisons on a task involving intervocalic consonant identification in noise, with all participants using the same training and test data. This paper describes the Challenge, listener results and baseline ASR performance.
  • Cooper, N., & Cutler, A. (2004). Perception of non-native phonemes in noise. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 469-472). Seoul: Sunjin Printing Co.

    Abstract

    We report an investigation of the perception of American English phonemes by Dutch listeners proficient in English. Listeners identified either the consonant or the vowel in most possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (16 dB, 8 dB, and 0 dB). Effects of signal-to-noise ratio on vowel and consonant identification are discussed as a function of syllable position and of relationship to the native phoneme inventory. Comparison of the results with previously reported data from native listeners reveals that noise affected the responding of native and non-native listeners similarly.
  • Corps, R. E., Pickering, M. J., & Gambi, C. (2019). Predicting turn-ends in discourse context. Language, Cognition and Neuroscience, 34(5), 615-627. doi:10.1080/23273798.2018.1552008.

    Abstract

    Research suggests that during conversation, interlocutors coordinate their utterances by predicting the speaker’s forthcoming utterance and its end. In two experiments, we used a button-pressing task, in which participants pressed a button when they thought a speaker reached the end of their utterance, to investigate what role the wider discourse plays in turn-end prediction. Participants heard two-utterance sequences, in which the content of the second utterance was or was not constrained by the content of the first. In both experiments, participants responded earlier, but not more precisely, when the first utterance was constraining rather than unconstraining. Response times and precision were unaffected by whether they listened to dialogues or monologues (Experiment 1) and by whether they read the first utterance out loud or silently (Experiment 2), providing no indication that activation of production mechanisms facilitates prediction. We suggest that content predictions aid comprehension but not turn-end prediction.

    Additional information

    plcp_a_1552008_sm1646.pdf
  • Cousminer, D. L., Berry, D. J., Timpson, N. J., Ang, W., Thiering, E., Byrne, E. M., Taal, H. R., Huikari, V., Bradfield, J. P., Kerkhof, M., Groen-Blokhuis, M. M., Kreiner-Møller, E., Marinelli, M., Holst, C., Leinonen, J. T., Perry, J. R. B., Surakka, I., Pietiläinen, O., Kettunen, J., Anttila, V., Kaakinen, M., Sovio, U., Pouta, A., Das, S., Lagou, V., Power, C., Prokopenko, I., Evans, D. M., Kemp, J. P., St Pourcain, B., Ring, S., Palotie, A., Kajantie, E., Osmond, C., Lehtimäki, T., Viikari, J. S., Kähönen, M., Warrington, N. M., Lye, S. J., Palmer, L. J., Tiesler, C. M. T., Flexeder, C., Montgomery, G. W., Medland, S. E., Hofman, A., Hakonarson, H., Guxens, M., Bartels, M., Salomaa, V., Murabito, J. M., Kaprio, J., Sørensen, T. I. A., Ballester, F., Bisgaard, H., Boomsma, D. I., Koppelman, G. H., Grant, S. F. A., Jaddoe, V. W. V., Martin, N. G., Heinrich, J., Pennell, C. E., Raitakari, O. T., Eriksson, J. G., Smith, G. D., Hyppönen, E., Järvelin, M.-R., McCarthy, M. I., Ripatti, S., Widén, E., Consortium ReproGen, & Consortium Early Growth Genetics (EGG) (2013). Genome-wide association and longitudinal analyses reveal genetic loci linking pubertal height growth, pubertal timing and childhood adiposity. Human Molecular Genetics, 22(13), 2735-2747. doi:10.1093/hmg/ddt104.

    Abstract

    The pubertal height growth spurt is a distinctive feature of childhood growth reflecting both the central onset of puberty and local growth factors. Although little is known about the underlying genetics, growth variability during puberty correlates with adult risks for hormone-dependent cancer and adverse cardiometabolic health. The only gene so far associated with pubertal height growth, LIN28B, pleiotropically influences childhood growth, puberty and cancer progression, pointing to shared underlying mechanisms. To discover genetic loci influencing pubertal height and growth and to place them in context of overall growth and maturation, we performed genome-wide association meta-analyses in 18 737 European samples utilizing longitudinally collected height measurements. We found significant associations (P < 1.67 × 10⁻⁸) at 10 loci, including LIN28B. Five loci associated with pubertal timing, all impacting multiple aspects of growth. In particular, a novel variant correlated with expression of MAPK3, and associated both with increased prepubertal growth and earlier menarche. Another variant near ADCY3-POMC associated with increased body mass index, reduced pubertal growth and earlier puberty. Whereas epidemiological correlations suggest that early puberty marks a pathway from rapid prepubertal growth to reduced final height and adult obesity, our study shows that individual loci associating with pubertal growth have variable longitudinal growth patterns that may differ from epidemiological observations. Overall, this study uncovers part of the complex genetic architecture linking pubertal height growth, the timing of puberty and childhood obesity and provides new information to pinpoint processes linking these traits.
  • Cozijn, R., Vonk, W., & Noordman, L. G. M. (2003). Afleidingen uit oogbewegingen: De invloed van het connectief 'omdat' op het maken van causale inferenties [Inferences from eye movements: The influence of the connective 'omdat' ('because') on making causal inferences]. Gramma/TTT, 9, 141-156.
  • Crasborn, O., & Sloetjes, H. (2008). Enhanced ELAN functionality for sign language corpora. In Proceedings of the 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language Corpora (pp. 39-43).

    Abstract

    The multimedia annotation tool ELAN was enhanced within the Corpus NGT project by a number of new and improved functions. Most of these functions were not specific to working with sign language video data, and can readily be used for other annotation purposes as well. Their direct utility for working with large amounts of annotation files during the development and use of the Corpus NGT project is what unites the various functions, which are described in this paper. In addition, we aim to characterise future developments that will be needed in order to work efficiently with larger amounts of annotation files, for which a closer integration with the use and display of metadata is foreseen.
  • Crasborn, O. A., & Zwitserlood, I. (2008). The Corpus NGT: An online corpus for professionals and laymen. In O. A. Crasborn, T. Hanke, E. Efthimiou, I. Zwitserlood, & E. Thoutenhooft (Eds.), Construction and Exploitation of Sign Language Corpora (pp. 44-49). Paris: ELDA.

    Abstract

    The Corpus NGT is an ambitious effort to record and archive video data from Sign Language of the Netherlands (Nederlandse Gebarentaal: NGT), guaranteeing online access to all interested parties and long-term availability. Data are collected from 100 native signers of NGT of different ages and from various regions in the country. Parts of these data are annotated and/or translated; the annotations and translations are part of the corpus. The Corpus NGT is accommodated in the Browsable Corpus based at the Max Planck Institute for Psycholinguistics. In this paper we share our experiences in data collection, video processing, annotation/translation and licensing involved in building the corpus.
  • Cristia, A. (2008). Cue weighting at different ages. Purdue Linguistics Association Working Papers, 1, 87-105.
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. As an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results, and download the most recent version of the table. Thus, this database is ideal to carry out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed to facilitate methodological convergence through the standardization of reports.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

    Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development, and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

    Additional information

    Cristia_Suppl_Material.xls
  • Cristia, A., & Seidl, A. (2008). Is infants' learning of sound patterns constrained by phonological features? Language Learning and Development, 4, 203-227. doi:10.1080/15475440802143109.

    Abstract

    Phonological patterns in languages often involve groups of sounds rather than individual sounds, which may be explained if phonology operates on the abstract features shared by those groups (Troubetzkoy, 1939/1969; Chomsky & Halle, 1968). Such abstract features may be present in the developing grammar either because they are part of a Universal Grammar included in the genetic endowment of humans (e.g., Hale, Kissock and Reiss, 2006), or plausibly because infants induce features from their linguistic experience (e.g., Mielke, 2004). A first experiment tested 7-month-old infants' learning of an artificial grammar pattern involving either a set of sounds defined by a phonological feature, or a set of sounds that cannot be described with a single feature—an “arbitrary” set. Infants were able to induce the constraint and generalize it to a novel sound only for the set that shared the phonological feature. A second study showed that infants' inability to learn the arbitrary grouping was not due to their inability to encode a constraint on some of the sounds involved.
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
  • Cristia, A., & Seidl, A. (2008). Why cross-linguistic frequency cannot be equated with ease of acquisition. University of Pennsylvania Working Papers in Linguistics, 14(1), 71-82. Retrieved from http://repository.upenn.edu/pwpl/vol14/iss1/6.
  • Croijmans, I., Speed, L., Arshamian, A., & Majid, A. (2019). Measuring the multisensory imagery of wine: The Vividness of Wine Imagery Questionnaire. Multisensory Research, 32(3), 179-195. doi:10.1163/22134808-20191340.

    Abstract

    When we imagine objects or events, we often engage in multisensory mental imagery. Yet, investigations of mental imagery have typically focused on only one sensory modality — vision. One reason for this is that the most common tool for the measurement of imagery, the questionnaire, has been restricted to unimodal ratings of the object. We present a new mental imagery questionnaire that measures multisensory imagery. Specifically, the newly developed Vividness of Wine Imagery Questionnaire (VWIQ) measures mental imagery of wine in the visual, olfactory, and gustatory modalities. Wine is an ideal domain to explore multisensory imagery because wine drinking is a multisensory experience, it involves the neglected chemical senses (smell and taste), and provides the opportunity to explore the effect of experience and expertise on imagery (from wine novices to experts). The VWIQ questionnaire showed high internal consistency and reliability, and correlated with other validated measures of imagery. Overall, the VWIQ may serve as a useful tool to explore mental imagery for researchers, as well as individuals in the wine industry during sommelier training and evaluation of wine professionals.
  • Cronin, K. A. (2013). [Review of the book Chimpanzees of the Lakeshore: Natural history and culture at Mahale by Toshisada Nishida]. Animal Behaviour, 85, 685-686. doi:10.1016/j.anbehav.2013.01.001.

    Abstract

    First paragraph: Motivated by his quest to characterize the society of the last common ancestor of humans and other great apes, Toshisada Nishida set out as a graduate student to the Mahale Mountains on the eastern shore of Lake Tanganyika, Tanzania. This book is a story of his 45 years with the Mahale chimpanzees, or as he calls it, their ethnography. Beginning with his accounts of meeting the Tongwe people and the challenges of provisioning the chimpanzees for habituation, Nishida reveals how he slowly unravelled the unit group and community basis of chimpanzee social organization. The book begins and ends with a feeling of chronological order, starting with his arrival at Mahale and ending with an eye towards the future, with concrete recommendations for protecting wild chimpanzees. However, the bulk of the book is topically organized with chapters on feeding behaviour, growth and development, play and exploration, communication, life histories, sexual strategies, politics and culture.
  • Cronin, K. A., & Snowdon, C. T. (2008). The effects of unequal reward distributions on cooperative problem solving by cottontop tamarins, Saguinus oedipus. Animal Behaviour, 75, 245-257. doi:10.1016/j.anbehav.2007.04.032.

    Abstract

    Cooperation among nonhuman animals has been the topic of much theoretical and empirical research, but few studies have examined systematically the effects of various reward payoffs on cooperative behaviour. Here, we presented heterosexual pairs of cooperatively breeding cottontop tamarins with a cooperative problem-solving task. In a series of four experiments, we examined how the tamarins’ cooperative performance changed under conditions in which (1) both actors were mutually rewarded, (2) both actors were rewarded reciprocally across days, (3) both actors competed for a monopolizable reward and (4) one actor repeatedly delivered a single reward to the other actor. The tamarins showed sensitivity to the reward structure, showing the greatest percentage of trials solved and shortest latency to solve the task in the mutual reward experiment and the lowest percentage of trials solved and longest latency to solve the task in the experiment in which one actor was repeatedly rewarded. However, even in the experiment in which the fewest trials were solved, the tamarins still solved 46 ± 12% of trials and little to no aggression was observed among partners following inequitable reward distributions. The tamarins did, however, show selfish motivation in each of the experiments. Nevertheless, in all experiments, unrewarded individuals continued to cooperate and procure rewards for their social partners.
  • Cuskley, C., Dingemanse, M., Kirby, S., & Van Leeuwen, T. M. (2019). Cross-modal associations and synesthesia: Categorical perception and structure in vowel–color mappings in a large online sample. Behavior Research Methods, 51, 1651-1675. doi:10.3758/s13428-019-01203-7.

    Abstract

    We report associations between vowel sounds, graphemes, and colours collected online from over 1000 Dutch speakers. We provide open materials including a Python implementation of the structure measure, and code for a single page web application to run simple cross-modal tasks. We also provide a full dataset of colour-vowel associations from 1164 participants, including over 200 synaesthetes identified using consistency measures. Our analysis reveals salient patterns in cross-modal associations, and introduces a novel measure of isomorphism in cross-modal mappings. We find that while acoustic features of vowels significantly predict certain mappings (replicating prior work), both vowel phoneme category and grapheme category are even better predictors of colour choice. Phoneme category is the best predictor of colour choice overall, pointing to the importance of phonological representations in addition to acoustic cues. Generally, high/front vowels are lighter, more green, and more yellow than low/back vowels. Synaesthetes respond more strongly on some dimensions, choosing lighter and more yellow colours for high and mid front vowels than non-synaesthetes. We also present a novel measure of cross-modal mappings adapted from ecology, which uses a simulated distribution of mappings to measure the extent to which participants' actual mappings are structured isomorphically across modalities. Synaesthetes have mappings that tend to be more structured than non-synaesthetes, and more consistent colour choices across trials correlate with higher structure scores. Nevertheless, the large majority (~70%) of participants produce structured mappings, indicating that the capacity to make isomorphically structured mappings across distinct modalities is shared to a large extent, even if the exact nature of mappings varies across individuals. Overall, this novel structure measure suggests a distribution of structured cross-modal association in the population, with synaesthetes on one extreme and participants with unstructured associations on the other.
  • Cutler, A. (2008). The abstract representations in speech processing. Quarterly Journal of Experimental Psychology, 61(11), 1601-1619. doi:10.1080/13803390802218542.

    Abstract

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjin Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A., Murty, L., & Otake, T. (2003). Rhythmic similarity effects in non-native listening? In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 329-332). Adelaide: Causal Productions.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. This language-specificity affects listening to non-native speech, if native procedures are applied even though inefficient for the non-native language. However, speakers of two languages with similar rhythmic interpretation should segment their own and the other language similarly. This was observed to date only for related languages (English-Dutch; French-Spanish). We now report experiments in which Japanese listeners heard Telugu, a Dravidian language unrelated to Japanese, and Telugu listeners heard Japanese. In both cases detection of target sequences in speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. These results suggest that Telugu and Japanese listeners use similar procedures in segmenting speech, and support the idea that languages fall into rhythmic classes, with aspects of phonological structure affecting listeners' speech segmentation.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A., McQueen, J. M., Butterfield, S., & Norris, D. (2008). Prelexically-driven perceptual retuning of phoneme boundaries. In Proceedings of Interspeech 2008 (pp. 2056-2056).

    Abstract

    Listeners heard an ambiguous /f-s/ in nonword contexts where only one of /f/ or /s/ was legal (e.g., frul/*srul or *fnud/snud). In later categorisation of a phonetic continuum from /f/ to /s/, their category boundaries had shifted; hearing -rul led to expanded /f/ categories, -nud expanded /s/. Thus phonotactic sequence information alone induces perceptual retuning of phoneme category boundaries; lexical access is not required.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Burchfield, A., & Antoniou, M. (2019). A criterial interlocutor tally for successful talker adaptation? In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1485-1489). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Part of the remarkable efficiency of listening is accommodation to unfamiliar talkers’ specific pronunciations by retuning of phonemic intercategory boundaries. Such retuning occurs in second (L2) as well as first language (L1); however, recent research with emigrés revealed successful adaptation in the environmental L2 but, unprecedentedly, not in L1 despite continuing L1 use. A possible explanation involving relative exposure to novel talkers is here tested in heritage language users with Mandarin as family L1 and English as environmental language. In English, exposure to an ambiguous sound in disambiguating word contexts prompted the expected adjustment of phonemic boundaries in subsequent categorisation. However, no adjustment occurred in Mandarin, again despite regular use. Participants reported highly asymmetric interlocutor counts in the two languages. We conclude that successful retuning ability requires regular exposure to novel talkers in the language in question, a criterion not met for the emigrés’ or for these heritage users’ L1.
