Publications

  • Braun, B., Dainora, A., & Ernestus, M. (2011). An unfamiliar intonation contour slows down online speech comprehension. Language and Cognitive Processes, 26(3), 350-375. doi:10.1080/01690965.2010.492641.

    Abstract

    This study investigates whether listeners' familiarity with an intonation contour affects speech processing. In three experiments, Dutch participants heard Dutch sentences with normal intonation contours and with unfamiliar ones and performed word-monitoring, lexical decision, or semantic categorisation tasks (the latter two with cross-modal identity priming). The unfamiliar intonation contour slowed down participants on all tasks, which demonstrates that an unfamiliar intonation contour has a robust detrimental effect on speech processing. Since cross-modal identity priming with a lexical decision task taps into lexical access, this effect obtained in this task suggests that an unfamiliar intonation contour hinders lexical access. Furthermore, results from the semantic categorisation task show that the effect of an uncommon intonation contour is long-lasting and hinders subsequent processing. Hence, intonation not only contributes to utterance meaning (emotion, sentence type, and focus), but also affects crucial aspects of the speech comprehension process and is more important than previously thought.
  • Braun, B., & Tagliapietra, L. (2011). On-line interpretation of intonational meaning in L2. Language and Cognitive Processes, 26(2), 224-235. doi:10.1080/01690965.2010.486209.

    Abstract

    Despite their relatedness, Dutch and German differ in the interpretation of a particular intonation contour, the hat pattern. In the literature, this contour has been described as neutral for Dutch, and as contrastive for German. A recent study supports the idea that Dutch listeners interpret this contour neutrally, compared to the contrastive interpretation of a lexically identical utterance realised with a double peak pattern. In particular, this study showed shorter lexical decision latencies to visual targets (e.g., PELIKAAN, “pelican”) following a contrastively related prime (e.g., flamingo, “flamingo”) only when the primes were embedded in sentences with a contrastive double peak contour, not in sentences with a neutral hat pattern. The present study replicates Experiment 1a of Braun and Tagliapietra (2009) with German learners of Dutch. Highly proficient learners of Dutch differed from Dutch natives in that they showed reliable priming effects for both intonation contours. Thus, the interpretation of intonational meaning in L2 appears to be fast, automatic, and driven by the associations learned in the native language.
  • Braun, B., Lemhöfer, K., & Mani, N. (2011). Perceiving unstressed vowels in foreign-accented English. Journal of the Acoustical Society of America, 129, 376-387. doi:10.1121/1.3500688.

    Abstract

    This paper investigated how foreign-accented stress cues affect on-line speech comprehension in British speakers of English. While unstressed English vowels are usually reduced to /ə/, Dutch speakers of English only slightly centralize them. Speakers of both languages differentiate stress by suprasegmentals (duration and intensity). In a cross-modal priming experiment, English listeners heard sentences ending in monosyllabic prime fragments—produced by either an English or a Dutch speaker of English—and performed lexical decisions on visual targets. Primes were either stress-matching (“ab” excised from absurd), stress-mismatching (“ab” from absence), or unrelated (“pro” from profound) with respect to the target (e.g., ABSURD). Results showed a priming effect for stress-matching primes only when produced by the English speaker, suggesting that vowel quality is a more important cue to word stress than suprasegmental information. Furthermore, for visual targets with word-initial secondary stress that do not require vowel reduction (e.g., CAMPAIGN), resembling the Dutch way of realizing stress, there was a priming effect for both speakers. Hence, our data suggest that Dutch-accented English is not harder to understand in general, but it is in instances where the language-specific implementation of lexical stress differs across languages.
  • Brehm, L., & Goldrick, M. (2017). Distinguishing discrete and gradient category structure in language: Insights from verb-particle constructions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(10), 1537-1556. doi:10.1037/xlm0000390.

    Abstract

    The current work uses memory errors to examine the mental representation of verb-particle constructions (VPCs; e.g., make up the story, cut up the meat). Some evidence suggests that VPCs are represented by a cline in which the relationship between the VPC and its component elements ranges from highly transparent (cut up) to highly idiosyncratic (make up). Other evidence supports a multiple class representation, characterizing VPCs as belonging to discretely separated classes differing in semantic and syntactic structure. We outline a novel paradigm to investigate the representation of VPCs in which we elicit illusory conjunctions, or memory errors sensitive to syntactic structure. We then use a novel application of piecewise regression to demonstrate that the resulting error pattern follows a cline rather than discrete classes. A preregistered replication verifies these findings, and a final preregistered study verifies that these errors reflect syntactic structure. This provides evidence for gradient rather than discrete representations across levels of representation in language processing.
  • Brehm, L., Taschenberger, L., & Meyer, A. S. (2019). Mental representations of partner task cause interference in picture naming. Acta Psychologica, 199: 102888. doi:10.1016/j.actpsy.2019.102888.

    Abstract

    Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.
  • Brehm, L., & Bock, K. (2017). Referential and lexical forces in number agreement. Language, Cognition and Neuroscience, 32(2), 129-146. doi:10.1080/23273798.2016.1234060.

    Abstract

    In work on grammatical agreement in sentence production, there are accounts of verb number formulation that emphasise the role of whole-structure properties and accounts that emphasise the role of word-driven properties. To evaluate these alternatives, we carried out two experiments that examined a referential (wholistic) contributor to agreement along with two lexical-semantic (local) factors. Both experiments gauged the accuracy and latency of inflected-verb production in order to assess how variations in grammatical number interacted with the other factors. The accuracy of verb production was modulated both by the referential effect of notional number and by the lexical-semantic effects of relatedness and category membership. As an index of agreement difficulty, latencies were little affected by either factor. The findings suggest that agreement is sensitive to referential as well as lexical forces and highlight the importance of lexical-structural integration in the process of sentence production.
  • Brehm, L., Jackson, C. N., & Miller, K. L. (2019). Speaker-specific processing of anomalous utterances. Quarterly Journal of Experimental Psychology, 72(4), 764-778. doi:10.1177/1747021818765547.

    Abstract

    Existing work shows that readers often interpret grammatical errors (e.g., The key to the cabinets *were shiny) and sentence-level blends (“without-blend”: Claudia left without her headphones *off) in a non-literal fashion, inferring that a more frequent or more canonical utterance was intended instead. This work examines how interlocutor identity affects the processing and interpretation of anomalous sentences. We presented anomalies in the context of “emails” attributed to various writers in a self-paced reading paradigm and used comprehension questions to probe how sentence interpretation changed based upon properties of the item and properties of the “speaker.” Experiment 1 compared standardised American English speakers to L2 English speakers; Experiment 2 compared the same standardised English speakers to speakers of a non-Standardised American English dialect. Agreement errors and without-blends both led to more non-literal responses than comparable canonical items. For agreement errors, more non-literal interpretations also occurred when sentences were attributed to speakers of Standardised American English than either non-Standardised group. These data suggest that understanding sentences relies on expectations and heuristics about which utterances are likely. These are based upon experience with language, with speaker-specific differences, and upon more general cognitive biases.

    Additional information

    Supplementary material
  • Brehm, L., & Bock, K. (2013). What counts in grammatical number agreement? Cognition, 128(2), 149-169. doi:10.1016/j.cognition.2013.03.009.

    Abstract

    Both notional and grammatical number affect agreement during language production. To explore their workings, we investigated how semantic integration, a type of conceptual relatedness, produces variations in agreement (Solomon & Pearlmutter, 2004). These agreement variations are open to competing notional and lexical–grammatical number accounts. The notional hypothesis is that changes in number agreement reflect differences in referential coherence: More coherence yields more singularity. The lexical–grammatical hypothesis is that changes in agreement arise from competition between nouns differing in grammatical number: More competition yields more plurality. These hypotheses make opposing predictions about semantic integration. On the notional hypothesis, semantic integration promotes singular agreement. On the lexical–grammatical hypothesis, semantic integration promotes plural agreement. We tested these hypotheses with agreement elicitation tasks in two experiments. Both experiments supported the notional hypothesis, with semantic integration creating faster and more frequent singular agreement. This implies that referential coherence mediates the effect of semantic integration on number agreement.
  • Brennan, J. R., & Martin, A. E. (2019). Phase synchronization varies systematically with linguistic structure composition. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1791): 20190305. doi:10.1098/rstb.2019.0305.

    Abstract

    Computation in neuronal assemblies is putatively reflected in the excitatory and inhibitory cycles of activation distributed throughout the brain. In speech and language processing, coordination of these cycles resulting in phase synchronization has been argued to reflect the integration of information on different timescales (e.g. segmenting acoustic signals into phonemic and syllabic representations; Giraud and Poeppel 2012 Nat. Neurosci. 15, 511 (doi:10.1038/nn.3063)). A natural extension of this claim is that phase synchronization functions similarly to support the inference of more abstract higher-level linguistic structures (Martin 2016 Front. Psychol. 7, 120; Martin and Doumas 2017 PLoS Biol. 15, e2000663 (doi:10.1371/journal.pbio.2000663); Martin and Doumas 2019 Curr. Opin. Behav. Sci. 29, 77–83 (doi:10.1016/j.cobeha.2019.04.008)). Hale et al. (Hale et al. 2018 Finding syntax in human encephalography with beam search. arXiv 1806.04127 (http://arxiv.org/abs/1806.04127)) showed that syntactically driven parsing decisions predict electroencephalography (EEG) responses in the time domain; here we ask whether phase synchronization in the form of either inter-trial phase coherence or cross-frequency coupling (CFC) between high-frequency (i.e. gamma) bursts and lower-frequency carrier signals (i.e. delta, theta) changes as the linguistic structures of compositional meaning (viz., bracket completions, as denoted by the onset of words that complete phrases) accrue. We use a naturalistic story-listening EEG dataset from Hale et al. to assess the relationship between linguistic structure and phase alignment. We observe increased phase synchronization as a function of phrase counts in the delta, theta, and gamma bands, especially for function words. A more complex pattern emerged for CFC as phrase count changed, possibly related to the lack of a one-to-one mapping between ‘size’ of linguistic structure and frequency band—an assumption that is tacit in recent frameworks. These results emphasize the important role that phase synchronization, desynchronization, and thus, inhibition, play in the construction of compositional meaning by distributed neural networks in the brain.
  • Broeder, D., Schonefeld, O., Trippel, T., Van Uytvanck, D., & Witt, A. (2011). A pragmatic approach to XML interoperability — the Component Metadata Infrastructure (CMDI). Proceedings of Balisage: The Markup Conference 2011. Balisage Series on Markup Technologies, 7. doi:10.4242/BalisageVol7.Broeder01.
  • Broeder, D. (2004). 40,000 IMDI sessions. Language Archive Newsletter, 1(4), 12-12.
  • Broeder, D., & Offenga, F. (2004). IMDI Metadata Set 3.0. Language Archive Newsletter, 1(2), 3-3.
  • Broersma, M., & Cutler, A. (2011). Competition dynamics of second-language listening. Quarterly Journal of Experimental Psychology, 64, 74-95. doi:10.1080/17470218.2010.499174.

    Abstract

    Spoken-word recognition in a nonnative language is particularly difficult where it depends on discrimination between confusable phonemes. Four experiments here examine whether this difficulty is in part due to phantom competition from “near-words” in speech. Dutch listeners confuse English /æ/ and /ε/, which could lead to the sequence daf being interpreted as deaf, or lemp being interpreted as lamp. In auditory lexical decision, Dutch listeners indeed accepted such near-words as real English words more often than English listeners did. In cross-modal priming, near-words extracted from word or phrase contexts (daf from DAFfodil, lemp from eviL EMPire) induced activation of corresponding real words (deaf; lamp) for Dutch, but again not for English, listeners. Finally, by the end of untruncated carrier words containing embedded words or near-words (definite; daffodil) no activation of the real embedded forms (deaf in definite) remained for English or Dutch listeners, but activation of embedded near-words (deaf in daffodil) did still remain, for Dutch listeners only. Misinterpretation of the initial vowel here favoured the phantom competitor and disfavoured the carrier (lexically represented as containing a different vowel). Thus, near-words compete for recognition and continue competing for longer than actually embedded words; nonnative listening indeed involves phantom competition.
  • Brouwer, S. (2013). Continuous recognition memory for spoken words in noise. Proceedings of Meetings on Acoustics, 19: 060117. doi:10.1121/1.4798781.

    Abstract

    Previous research has shown that talker variability affects recognition memory for spoken words (Palmeri et al., 1993). This study examines whether additive noise is similarly retained in memory for spoken words. In a continuous recognition memory task, participants listened to a list of spoken words mixed with noise consisting of a pure tone or of high-pass filtered white noise. The noise and speech were in non-overlapping frequency bands. In Experiment 1, listeners indicated whether each spoken word in the list was OLD (heard before in the list) or NEW. Results showed that listeners were as accurate and as fast at recognizing a word as old if it was repeated with the same or different noise. In Experiment 2, listeners also indicated whether words judged as OLD were repeated with the same or with a different type of noise. Results showed that listeners benefitted from hearing words presented with the same versus different noise. These data suggest that spoken words and temporally-overlapping but spectrally non-overlapping noise are retained or reconstructed together for explicit, but not for implicit recognition memory. This indicates that the extent to which noise variability is retained seems to depend on the depth of processing.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.

    Abstract

    In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
  • Brown, A., & Gullberg, M. (2011). Bidirectional cross-linguistic influence in event conceptualization? Expressions of Path among Japanese learners of English. Bilingualism: Language and Cognition, 14, 79-94. doi:10.1017/S1366728910000064.

    Abstract

    Typological differences in expressions of motion are argued to have consequences for event conceptualization. In SLA, studies generally find transfer of L1 expressions and accompanying event construals, suggesting resistance to the restructuring of event conceptualization. The current study tackles such restructuring in SLA within the context of bidirectional cross-linguistic influence, focusing on expressions of Path in English and Japanese. We probe the effects of lexicalization patterns on event construal by focusing on different Path components: Source, Via and Goal. Crucially, we compare the same speakers performing both in the L1 and L2 to ascertain whether the languages influence each other. We argue for the potential for restructuring, even at modest levels of L2 proficiency, by showing that not only do L1 patterns shape construal in the L2, but that L2 patterns may subtly and simultaneously broaden construal in the L1 within an individual learner.
  • Brown, P. (2011). Color me bitter: Crossmodal compounding in Tzeltal perception words. The Senses & Society, 6(1), 106-116. doi:10.2752/174589311X12893982233957.

    Abstract

    Within a given language and culture, distinct sensory modalities are often given differential linguistic treatment in ways reflecting cultural ideas about, and uses for, the senses. This article reports on sensory expressions in the Mayan language Tzeltal, spoken in southeastern Mexico. Drawing both on data derived from Tzeltal consultants’ responses to standardized sensory elicitation stimuli and on sensory descriptions produced in more natural contexts, I examine words characterizing sensations in the domains of color and taste. In just these two domains, a limited set of basic terms along with productive word-formation processes of compounding and reduplication are used in analogous ways to produce words that distinguish particular complex sensations or gestalts: e.g. in the color domain, yax-boj-boj (yax ‘grue’ + boj ‘cut’), of mouth stained green from eating green vegetables, or, in the taste domain, chi’-pik-pik (chi’ ‘sweet/salty’ + pik ‘touch’) of a slightly prickly salty taste. I relate the semantics of crossmodal compounds to material technologies involving color and taste (weaving, food production), and to ideas about “hot”/“cold” categories, which provide a cultural rationale for eating practices and medical interventions. I argue that language plays a role in promoting crossmodal associations, resulting in a (partially) culture-specific construction of sensory experience.
  • Brown, A., & Gullberg, M. (2013). L1–L2 convergence in clausal packaging in Japanese and English. Bilingualism: Language and Cognition, 16, 477-494. doi:10.1017/S1366728912000491.

    Abstract

    This research received technical and financial support from Syracuse University, the Max Planck Institute for Psycholinguistics, and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; MPI 56-384, The Dynamics of Multilingual Processing, awarded to Marianne Gullberg and Peter Indefrey).
  • Brown, P. (1994). The INs and ONs of Tzeltal locative expressions: The semantics of static descriptions of location. Linguistics, 32, 743-790.

    Abstract

    This paper explores how static topological spatial relations such as contiguity, contact, containment, and support are expressed in the Mayan language Tzeltal. Three distinct Tzeltal systems for describing spatial relationships - geographically anchored (place names, geographical coordinates), viewer-centered (deictic), and object-centered (body parts, relational nouns, and dispositional adjectives) - are presented, but the focus here is on the object-centered system of dispositional adjectives in static locative expressions. Tzeltal encodes shape/position/configuration gestalts in verb roots; predicates formed from these are an essential element in locative descriptions. Specificity of shape in the predicate allows spatial relations between figure and ground objects to be understood by implication. Tzeltal illustrates an alternative strategy to that of prepositional languages like English: rather than elaborating shape distinctions in the nouns and minimizing them in the locatives, Tzeltal encodes shape and configuration very precisely in verb roots, leaving many object nouns unspecified for shape. The Tzeltal case thus presents a direct challenge to cognitive science claims that, in both language and cognition, WHAT is kept distinct from WHERE.
  • Brown-Schmidt, S., & Konopka, A. E. (2011). Experimental approaches to referential domains and the on-line processing of referring expressions in unscripted conversation. Information, 2, 302-326. doi:10.3390/info2020302.

    Abstract

    This article describes research investigating the on-line processing of language in unscripted conversational settings. In particular, we focus on the process of formulating and interpreting definite referring expressions. Within this domain we present results of two eye-tracking experiments addressing the problem of how speakers interrogate the referential domain in preparation to speak, how they select an appropriate expression for a given referent, and how addressees interpret these expressions. We aim to demonstrate that it is possible, and indeed fruitful, to examine unscripted, conversational language using modified experimental designs and standard hypothesis testing procedures.
  • Brugman, H. (2004). ELAN 2.2 now available. Language Archive Newsletter, 1(3), 13-14.
  • Brugman, H., Sloetjes, H., Russel, A., & Klassmann, A. (2004). ELAN 2.3 available. Language Archive Newsletter, 1(4), 13-13.
  • Brugman, H. (2004). ELAN Releases 2.0.2 and 2.1. Language Archive Newsletter, 1(2), 4-4.
  • De Bruin, A., De Groot, A., De Heer, L., Bok, J., Wielinga, P., Hamans, M., van Rotterdam, B., & Janse, I. (2011). Detection of Coxiella burnetii in complex matrices by using multiplex quantitative PCR during a major Q fever outbreak in the Netherlands. Applied and Environmental Microbiology, 77, 6516-6523. doi:10.1128/AEM.05097-11.

    Abstract

    Q fever, caused by Coxiella burnetii, is a zoonosis with a worldwide distribution. A large rural area in the southeast of the Netherlands was heavily affected by Q fever between 2007 and 2009. This initiated the development of a robust and internally controlled multiplex quantitative PCR (qPCR) assay for the detection of C. burnetii DNA in veterinary and environmental matrices on suspected Q fever-affected farms. The qPCR detects three C. burnetii targets (icd, com1, and IS1111) and one Bacillus thuringiensis internal control target (cry1b). Bacillus thuringiensis spores were added to samples to control both DNA extraction and PCR amplification. The performance of the qPCR assay was investigated and showed a high efficiency; a limit of detection of 13.0, 10.6, and 10.4 copies per reaction for the targets icd, com1, and IS1111, respectively; and no cross-reactivity with the nontarget organisms tested. Screening for C. burnetii DNA on 29 suspected Q fever-affected farms during the Q fever epidemic in 2008 showed that swabs from dust-accumulating surfaces contained higher levels of C. burnetii DNA than vaginal swabs from goats or sheep. PCR inhibition by coextracted substances was observed in some environmental samples, and 10- or 100-fold dilutions of samples were sufficient to obtain interpretable signals for both the C. burnetii targets and the internal control. The inclusion of an internal control target and three C. burnetii targets in one multiplex qPCR assay showed that complex veterinary and environmental matrices can be screened reliably for the presence of C. burnetii DNA during an outbreak.
  • Buetti, S., Tamietto, M., Hervais-Adelman, A., Kerzel, D., de Gelder, B., & Pegna, A. J. (2013). Dissociation between goal-directed and discrete response localization in a patient with bilateral cortical blindness. Journal of Cognitive Neuroscience, 25(10), 1769-1775. doi:10.1162/jocn_a_00404.

    Abstract

    We investigated localization performance of simple targets in patient TN, who suffered bilateral damage of his primary visual cortex and shows complete cortical blindness. Using a two-alternative forced-choice paradigm, TN was asked to guess the position of left-right targets with goal-directed and discrete manual responses. The results indicate a clear dissociation between goal-directed and discrete responses. TN pointed toward the correct target location in approximately 75% of the trials but was at chance level with discrete responses. This indicates that the residual ability to localize an unseen stimulus depends critically on the possibility to translate a visual signal into a goal-directed motor output at least in certain forms of blindsight.
  • Bulut, T., Hung, Y., Tzeng, O., & Wu, D. (2017). Neural correlates of processing sentences and compound words in Chinese. PLOS ONE, 12(12): e0188526. doi:10.1371/journal.pone.0188526.
  • Burba, I., Colombo, G. I., Staszewsky, L. I., De Simone, M., Devanna, P., Nanni, S., Avitabile, D., Molla, F., Cosentino, S., Russo, I., De Angelis, N., Soldo, A., Biondi, A., Gambini, E., Gaetano, C., Farsetti, A., Pompilio, G., Latini, R., Capogrossi, M. C., & Pesce, M. (2011). Histone Deacetylase Inhibition Enhances Self Renewal and Cardioprotection by Human Cord Blood-Derived CD34+ Cells. PLoS One, 6(7): e22158. doi:10.1371/journal.pone.0022158.

    Abstract

    Use of peripheral blood- or bone marrow-derived progenitors for ischemic heart repair is a feasible option to induce neo-vascularization in ischemic tissues. These cells, named Endothelial Progenitor Cells (EPCs), have been extensively characterized phenotypically and functionally. The clinical efficacy of cardiac repair by EPCs remains, however, limited, due to cell-autonomous defects as a consequence of risk factors. The devising of “enhancement” strategies has therefore been pursued to improve the repair ability of these cells and increase the clinical benefit.
  • Burenhult, N. (2011). [Review of the book New approaches to Slavic verbs of motion ed. by Victoria Hasko and Renee Perelmutter]. Linguistics, 49, 645-648.
  • Burenhult, N. (2004). Landscape terms and toponyms in Jahai: A field report. Lund Working Papers, 51, 17-29.
  • Burenhult, N., Hill, C., Huber, J., Van Putten, S., Rybka, K., & San Roque, L. (2017). Forests: The cross-linguistic perspective. Geographica Helvetica, 72(4), 455-464. doi:10.5194/gh-72-455-2017.

    Abstract

    Do all humans perceive, think, and talk about tree cover ("forests") in more or less the same way? International forestry programs frequently seem to operate on the assumption that they do. However, recent advances in the language sciences show that languages vary greatly as to how the landscape domain is lexicalized and grammaticalized. Different languages segment and label the large-scale environment and its features according to astonishingly different semantic principles, often in tandem with highly culture-specific practices and ideologies. Presumed basic concepts like mountain, valley, and river cannot in fact be straightforwardly translated across languages. In this paper we describe, compare, and evaluate some of the semantic diversity observed in relation to forests. We do so on the basis of first-hand linguistic field data from a global sample of indigenous categorization systems as they are manifested in the following languages: Avatime (Ghana), Duna (Papua New Guinea), Jahai (Malay Peninsula), Lokono (the Guianas), Makalero (East Timor), and Umpila/Kuuku Ya'u (Cape York Peninsula). We show that basic linguistic categories relating to tree cover vary considerably in their principles of semantic encoding across languages, and that forest is a challenging category from the point of view of intercultural translatability. This has consequences for current global policies and programs aimed at standardizing forest definitions and measurements. It calls for greater attention to categorial diversity in designing and implementing such agendas, and for receptiveness to and understanding of local indigenous classification systems in communicating those agendas on the ground.
  • Burenhult, N., & Majid, A. (2011). Olfaction in Aslian ideology and language. The Senses & Society, 6(1), 19-29. doi:10.2752/174589311X12893982233597.

    Abstract

    The cognitive- and neurosciences have supposed that the perceptual world of the individual is dominated by vision, followed closely by audition, but that olfaction is merely vestigial. Aslian-speaking communities (Austroasiatic, Malay Peninsula) challenge this view. For the Jahai - a small group of rainforest foragers - odor plays a central role in both culture and language. Jahai ideology revolves around a complex set of beliefs that structures the human relationship with the supernatural. Central to this relationship are hearing, vision, and olfaction. In Jahai language, olfaction also receives special attention. There are at least a dozen or so abstract descriptive odor categories that are basic, everyday terms. This lexical elaboration of odor is not unique to the Jahai but can be seen across many contemporary Austroasiatic languages and transcends major cultural and environmental boundaries. These terms appear to be inherited from ancestral language states, suggesting a longstanding preoccupation with odor in this part of the world. Contrary to the prevailing assumption in the cognitive sciences, these languages and cultures demonstrate that odor is far from vestigial in humans.
  • Bürki, A., Ernestus, M., Gendrot, C., Fougeron, C., & Frauenfelder, U. H. (2011). What affects the presence versus absence of schwa and its duration: A corpus analysis of French connected speech. Journal of the Acoustical Society of America, 130, 3980-3991. doi:10.1121/1.3658386.

    Abstract

    This study presents an analysis of over 4000 tokens of words produced as variants with and without schwa in a French corpus of radio-broadcasted speech. In order to determine which of the many variables mentioned in the literature influence variant choice, 17 predictors were tested in the same analysis. Only five of these variables appeared to condition variant choice. The question of the processing stage, or locus, of this alternation process is also addressed in a comparison of the variables that predict variant choice with the variables that predict the acoustic duration of schwa in variants with schwa. Only two variables predicting variant choice also predict schwa duration. The limited overlap between the predictors for variant choice and for schwa duration, combined with the nature of these variables, suggest that the variants without schwa do not result from a phonetic process of reduction; that is, they are not the endpoint of gradient schwa shortening. Rather, these variants are generated early in the production process, either during phonological encoding or word-form retrieval. These results, based on naturally produced speech, provide a useful complement to on-line production experiments using artificial speech tasks.
  • Burra, N., Hervais-Adelman, A., Kerzel, D., Tamietto, M., de Gelder, B., & Pegna, A. J. (2013). Amygdala Activation for Eye Contact Despite Complete Cortical Blindness. The Journal of Neuroscience, 33(25), 10483-10489. doi:10.1523/jneurosci.3994-12.2013.

    Abstract

    Cortical blindness refers to the loss of vision that occurs after destruction of the primary visual cortex. Although there is no sensory cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be processed in the absence of any functional visual cortex. A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm during which blocks of faces were presented either with their gaze directed toward or away from the viewer. Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.
  • Burra, N., Hervais-Adelman, A., Celeghin, A., de Gelder, B., & Pegna, A. J. (2019). Affective blindsight relies on low spatial frequencies. Neuropsychologia, 128, 44-49. doi:10.1016/j.neuropsychologia.2017.10.009.

    Abstract

    The human brain can process facial expressions of emotions rapidly and without awareness. Several studies in patients with damage to their primary visual cortices have shown that they may be able to guess the emotional expression on a face despite their cortical blindness. This non-conscious processing, called affective blindsight, may arise through an intact subcortical visual route that leads from the superior colliculus to the pulvinar, and thence to the amygdala. This pathway is thought to process the crude visual information conveyed by the low spatial frequencies of the stimuli.

    In order to investigate whether this is the case, we studied a patient (TN) with bilateral cortical blindness and affective blindsight. An fMRI paradigm was performed in which fearful and neutral expressions were presented using faces that were either unfiltered, or filtered to remove high or low spatial frequencies. Unfiltered fearful faces produced right amygdala activation although the patient was unaware of the presence of the stimuli. More importantly, the low spatial frequency components of fearful faces continued to produce right amygdala activity while the high spatial frequency components did not. Our findings thus confirm that the visual information present in the low spatial frequencies is sufficient to produce affective blindsight, further suggesting that its existence could rely on the subcortical colliculo-pulvino-amygdalar pathway.
  • Cai, Z. G., Connell, L., & Holler, J. (2013). Time does not flow without language: Spatial distance affects temporal duration regardless of movement or direction. Psychonomic Bulletin & Review, 20(5), 973-980. doi:10.3758/s13423-013-0414-3.

    Abstract

    Much evidence has suggested that people conceive of time as flowing directionally in transverse space (e.g., from left to right for English speakers). However, this phenomenon has never been tested in a fully nonlinguistic paradigm where neither stimuli nor task use linguistic labels, which raises the possibility that time is directional only when reading/writing direction has been evoked. In the present study, English-speaking participants viewed a video where an actor sang a note while gesturing and reproduced the duration of the sung note by pressing a button. Results showed that the perceived duration of the note was increased by a long-distance gesture, relative to a short-distance gesture. This effect was equally strong for gestures moving from left to right and from right to left and was not dependent on gestures depicting movement through space; a weaker version of the effect emerged with static gestures depicting spatial distance. Since both our gesture stimuli and temporal reproduction task were nonlinguistic, we conclude that the spatial representation of time is nondirectional: Movement contributes, but is not necessary, to the representation of temporal information in a transverse timeline.
  • Calandruccio, L., Brouwer, S., Van Engen, K. J., Dhar, S., & Bradlow, A. R. (2013). Masking release due to linguistic and phonetic dissimilarity between the target and masker speech. American Journal of Audiology, 22, 157-164. doi:10.1044/1059-0889(2013/12-0072).

    Abstract

    Purpose: To investigate masking release for speech maskers for linguistically and phonetically close (English and Dutch) and distant (English and Mandarin) language pairs. Method: Thirty-two monolingual speakers of English with normal audiometric thresholds participated in the study. Data are reported for an English sentence recognition task in English and for Dutch and Mandarin competing speech maskers (Experiment 1) and noise maskers (Experiment 2) that were matched either to the long-term average speech spectra or to the temporal modulations of the speech maskers from Experiment 1. Results: Listener performance increased as the target-to-masker linguistic distance increased (English-in-English < English-in-Dutch < English-in-Mandarin). Conclusion: Spectral differences between maskers can account for some, but not all, of the variation in performance between maskers; however, temporal differences did not seem to play a significant role.
  • Callaghan, E., Holland, C., & Kessler, K. (2017). Age-Related Changes in the Ability to Switch between Temporal and Spatial Attention. Frontiers in Aging Neuroscience, 9: 28. doi:10.3389/fnagi.2017.00028.

    Abstract

    Background: Identifying age-related changes in cognition that contribute towards reduced driving performance is important for the development of interventions to improve older adults' driving and prolong the time that they can continue to drive. While driving, one is often required to switch from attending to events changing in time to distributing attention spatially. Although there is extensive research into both spatial attention and temporal attention and how these change with age, the literature on switching between these modalities of attention is limited within any age group. Methods: Age groups (21-30, 40-49, 50-59, 60-69 and 70+ years) were compared on their ability to switch between detecting a target in a rapid serial visual presentation (RSVP) stream and detecting a target in a visual search display. To manipulate the cost of switching, the target in the RSVP stream was either the first item in the stream (Target 1st), towards the end of the stream (Target Mid), or absent from the stream (Distractor Only). Visual search response times and accuracy were recorded. Target 1st trials behaved as no-switch trials, as attending to the remaining stream was not necessary. Target Mid and Distractor Only trials behaved as switch trials, as attending to the stream to the end was required. Results: Visual search response times (RTs) were longer on "Target Mid" and "Distractor Only" trials in comparison to "Target 1st" trials, reflecting switch-costs. Larger switch-costs were found in both the 40-49 and 60-69 years group in comparison to the 21-30 years group when switching from the Target Mid condition. Discussion: Findings warrant further exploration as to whether there are age-related changes in the ability to switch between these modalities of attention while driving. If older adults display poor performance when switching between temporal and spatial attention while driving, then the development of an intervention to preserve and improve this ability would be beneficial.
  • Campisi, E., & Özyürek, A. (2013). Iconicity as a communicative strategy: Recipient design in multimodal demonstrations for adults and children. Journal of Pragmatics, 47, 14-27. doi:10.1016/j.pragma.2012.12.007.

    Abstract

    Humans are the only species that uses communication to teach new knowledge to novices, usually to children (Tomasello, 1999 and Csibra and Gergely, 2006). This context of communication can employ “demonstrations” and it takes place with or without the help of objects (Clark, 1996). Previous research has focused on understanding the nature of demonstrations for very young children and with objects involved. However, little is known about the strategies used in demonstrating an action to an older child in comparison to another adult and without the use of objects, i.e., with gestures only. We tested if during demonstration of an action speakers use different degrees of iconicity in gestures for a child compared to an adult. 18 Italian subjects described to a camera how to make coffee imagining the listener as a 12-year-old child, a novice or an expert adult. While speech was found more informative both for the novice adult and for the child compared to the expert adult, the rate of iconic gestures increased and they were more informative and bigger only for the child compared to both of the adult conditions. Iconicity in gestures can be a powerful communicative strategy in teaching new knowledge to children in demonstrations and this is in line with claims that it can be used as a scaffolding device in grounding knowledge in experience (Perniss et al., 2010).
  • Cappuccio, M. L., Chu, M., & Kita, S. (2013). Pointing as an instrumental gesture: Gaze representation through indication. Humana.Mente: Journal of Philosophical Studies, 24, 125-149.

    Abstract

    We call those gestures “instrumental” that can enhance certain thinking processes of an agent by offering him representational models of his actions in a virtual space of imaginary performative possibilities. We argue that pointing is an instrumental gesture in that it represents geometrical information on one’s own gaze direction (i.e., a spatial model for attentional/ocular fixation/orientation), and provides a ritualized template for initiating gaze coordination and joint attention. We counter two possible objections, asserting respectively that the representational content of pointing is not constitutive, but derived from language, and that pointing directly solicits gaze coordination, without representing it. We consider two studies suggesting that attention and spatial perception are actively modified by one’s own pointing activity: the first study shows that pointing gestures help children link sets of objects to their corresponding number words; the second, that adults are faster and more accurate in counting when they point.
  • Capredon, M., Brucato, N., Tonasso, L., Choesmel-Cadamuro, V., Ricaut, F.-X., Razafindrazaka, H., Ratolojanahary, M. A., Randriamarolaza, L.-P., Champion, B., & Dugoujon, J.-M. (2013). Tracing Arab-Islamic Inheritance in Madagascar: Study of the Y-chromosome and Mitochondrial DNA in the Antemoro. PLoS One, 8(11): e80932. doi:10.1371/journal.pone.0080932.

    Abstract

    Madagascar is located at the crossroads of the Asian and African worlds and is therefore of particular interest for studies on human population migration. Within the large human diversity of the Great Island, we focused our study on a particular ethnic group, the Antemoro. Their culture presents an important Arab-Islamic influence, but the question of an Arab biological inheritance remains unresolved. We analyzed paternal (n=129) and maternal (n=135) lineages of this ethnic group. Although the majority of Antemoro genetic ancestry comes from sub-Saharan African and Southeast Asian gene pools, we observed in their paternal lineages two specific haplogroups (J1 and T1) linked to Middle Eastern origins. This inheritance was restricted to some Antemoro sub-groups. Statistical analyses tended to confirm significant Middle Eastern genetic contribution. This study gives a new perspective on the large human genetic diversity in Madagascar.
  • Carlsson, K., Petersson, K. M., Lundqvist, D., Karlsson, A., Ingvar, M., & Öhman, A. (2004). Fear and the amygdala: manipulation of awareness generates differential cerebral responses to phobic and fear-relevant (but nonfeared) stimuli. Emotion, 4(4), 340-353. doi:10.1037/1528-3542.4.4.340.

    Abstract

    Rapid response to danger holds an evolutionary advantage. In this positron emission tomography study, phobics were exposed to masked visual stimuli with timings that either allowed awareness or not of either phobic, fear-relevant (e.g., spiders to snake phobics), or neutral images. When the timing did not permit awareness, the amygdala responded to both phobic and fear-relevant stimuli. With time for more elaborate processing, phobic stimuli resulted in an addition of an affective processing network to the amygdala activity, whereas no activity was found in response to fear-relevant stimuli. Also, right prefrontal areas appeared deactivated, comparing aware phobic and fear-relevant conditions. Thus, a shift from top-down control to an affectively driven system optimized for speed was observed in phobic relative to fear-relevant aware processing.
  • Carota, F., Kriegeskorte, N., Nili, H., & Pulvermüller, F. (2017). Representational Similarity Mapping of Distributional Semantics in Left Inferior Frontal, Middle Temporal, and Motor Cortex. Cerebral Cortex, 27(1), 294-309. doi:10.1093/cercor/bhw379.

    Abstract

    Language comprehension engages a distributed network of frontotemporal, parietal, and sensorimotor regions, but it is still unclear how meaning of words and their semantic relationships are represented and processed within these regions and to which degrees lexico-semantic representations differ between regions and semantic types. We used fMRI and representational similarity analysis to relate word-elicited multivoxel patterns to semantic similarity between action and object words. In left inferior frontal (BA 44-45-47), left posterior middle temporal and left precentral cortex, the similarity of brain response patterns reflected semantic similarity among action-related verbs, as well as across lexical classes: between action verbs and tool-related nouns and, to a degree, between action verbs and food nouns, but not between action verbs and animal nouns. Instead, posterior inferior temporal cortex exhibited a reverse response pattern, which reflected the semantic similarity among object-related nouns, but not action-related words. These results show that semantic similarity is encoded by a range of cortical areas, including multimodal association (e.g., anterior inferior frontal, posterior middle temporal) and modality-preferential (premotor) cortex and that the representational geometries in these regions are partly dependent on semantic type, with semantic similarity among action-related words crossing lexical-semantic category boundaries.
  • Carrion Castillo, A., Maassen, B., Franke, B., Heister, A., Naber, M., Van der Leij, A., Francks, C., & Fisher, S. E. (2017). Association analysis of dyslexia candidate genes in a Dutch longitudinal sample. European Journal of Human Genetics, 25(4), 452-460. doi:10.1038/ejhg.2016.194.

    Abstract

    Dyslexia is a common specific learning disability with a substantive genetic component. Several candidate genes have been proposed to be implicated in dyslexia susceptibility, such as DYX1C1, ROBO1, KIAA0319, and DCDC2. Associations with variants in these genes have also been reported with a variety of psychometric measures tapping into the underlying processes that might be impaired in dyslexic people. In this study, we first conducted a literature review to select single nucleotide polymorphisms (SNPs) in dyslexia candidate genes that had been repeatedly implicated across studies. We then assessed the SNPs for association in the richly phenotyped longitudinal data set from the Dutch Dyslexia Program. We tested for association with several quantitative traits, including word and nonword reading fluency, rapid naming, phoneme deletion, and nonword repetition. In this, we took advantage of the longitudinal nature of the sample to examine if associations were stable across four educational time-points (from 7 to 12 years). Two SNPs in the KIAA0319 gene were nominally associated with rapid naming, and these associations were stable across different ages. Genetic association analysis with complex cognitive traits can be enriched through the use of longitudinal information on trait development.
  • Carrion Castillo, A., Van der Haegen, L., Tzourio-Mazoyer, N., Kavaklioglu, T., Badillo, S., Chavent, M., Saracco, J., Brysbaert, M., Fisher, S. E., Mazoyer, B., & Francks, C. (2019). Genome sequencing for rightward hemispheric language dominance. Genes, Brain and Behavior, 18(5): e12572. doi:10.1111/gbb.12572.

    Abstract

    Most people have left‐hemisphere dominance for various aspects of language processing, but only roughly 1% of the adult population has atypically reversed, rightward hemispheric language dominance (RHLD). The genetic‐developmental program that underlies leftward language laterality is unknown, as are the causes of atypical variation. We performed an exploratory whole‐genome‐sequencing study, with the hypothesis that strongly penetrant, rare genetic mutations might sometimes be involved in RHLD. This was by analogy with situs inversus of the visceral organs (left‐right mirror reversal of the heart, lungs and so on), which is sometimes due to monogenic mutations. The genomes of 33 subjects with RHLD were sequenced and analyzed with reference to large population‐genetic data sets, as well as 34 subjects (14 left‐handed) with typical language laterality. The sample was powered to detect rare, highly penetrant, monogenic effects if they would be present in at least 10 of the 33 RHLD cases and no controls, but no individual genes had mutations in more than five RHLD cases while being un‐mutated in controls. A hypothesis derived from invertebrate mechanisms of left‐right axis formation led to the detection of an increased mutation load, in RHLD subjects, within genes involved with the actin cytoskeleton. The latter finding offers a first, tentative insight into molecular genetic influences on hemispheric language dominance.

    Additional information

    gbb12572-sup-0001-AppendixS1.docx
  • Carrion Castillo, A., Franke, B., & Fisher, S. E. (2013). Molecular genetics of dyslexia: An overview. Dyslexia, 19(4), 214-240. doi:10.1002/dys.1464.

    Abstract

    Dyslexia is a highly heritable learning disorder with a complex underlying genetic architecture. Over the past decade, researchers have pinpointed a number of candidate genes that may contribute to dyslexia susceptibility. Here, we provide an overview of the state of the art, describing how studies have moved from mapping potential risk loci, through identification of associated gene variants, to characterization of gene function in cellular and animal model systems. Work thus far has highlighted some intriguing mechanistic pathways, such as neuronal migration, axon guidance, and ciliary biology, but it is clear that we still have much to learn about the molecular networks that are involved. We end the review by highlighting the past, present, and future contributions of the Dutch Dyslexia Programme to studies of genetic factors. In particular, we emphasize the importance of relating genetic information to intermediate neurobiological measures, as well as the value of incorporating longitudinal and developmental data into molecular designs.
  • Casasanto, D. (2011). Different bodies, different minds: The body-specificity of language and thought. Current Directions in Psychological Science, 20, 378-383. doi:10.1177/0963721411422058.

    Abstract

    Do people with different kinds of bodies think differently? According to the body-specificity hypothesis (Casasanto, 2009), they should. In this article, I review evidence that right- and left-handers, who perform actions in systematically different ways, use correspondingly different areas of the brain for imagining actions and representing the meanings of action verbs. Beyond concrete actions, the way people use their hands also influences the way they represent abstract ideas with positive and negative emotional valence like “goodness,” “honesty,” and “intelligence,” and how they communicate about them in spontaneous speech and gesture. Changing how people use their right and left hands can cause them to think differently, suggesting that motoric differences between right- and left-handers are not merely correlated with cognitive differences. Body-specific patterns of motor experience shape the way we think, communicate, and make decisions.
  • Casasanto, D., & Chrysikou, E. G. (2011). When left is "Right": Motor fluency shapes abstract concepts. Psychological Science, 22, 419-422. doi:10.1177/0956797611401755.

    Abstract

    Right- and left-handers implicitly associate positive ideas like "goodness" and "honesty" more strongly with their dominant side of space, the side on which they can act more fluently, and negative ideas more strongly with their nondominant side. Here we show that right-handers’ tendency to associate "good" with "right" and "bad" with "left" can be reversed as a result of both long- and short-term changes in motor fluency. Among patients who were right-handed prior to unilateral stroke, those with disabled left hands associated "good" with "right," but those with disabled right hands associated "good" with "left," as natural left-handers do. A similar pattern was found in healthy right-handers whose right or left hand was temporarily handicapped in the laboratory. Even a few minutes of acting more fluently with the left hand can change right-handers’ implicit associations between space and emotional valence, causing a reversal of their usual judgments. Motor experience plays a causal role in shaping abstract thought.
  • Casillas, M., & Cristia, A. (2019). A step-by-step guide to collecting and analyzing long-format speech environment (LFSE) recordings. Collabra, 5(1): 24. doi:10.1525/collabra.209.

    Abstract

    Recent years have seen rapid technological development of devices that can record communicative behavior as participants go about daily life. This paper is intended as an end-to-end methodological guidebook for potential users of these technologies, including researchers who want to study children’s or adults’ communicative behavior in everyday contexts. We explain how long-format speech environment (LFSE) recordings provide a unique view on language use and how they can be used to complement other measures at the individual and group level. We aim to help potential users of these technologies make informed decisions regarding research design, hardware, software, and archiving. We also provide information regarding ethics and implementation, issues that are difficult to navigate for those new to this technology, and on which little or no resources are available. This guidebook offers a concise summary of information for new users and points to sources of more detailed information for more advanced users. Links to discussion groups and community-augmented databases are also provided to help readers stay up-to-date on the latest developments.
  • Casillas, M., Rafiee, A., & Majid, A. (2019). Iranian herbalists, but not cooks, are better at naming odors than laypeople. Cognitive Science, 43(6): e12763. doi:10.1111/cogs.12763.

    Abstract

    Odor naming is enhanced in communities where communication about odors is a central part of daily life (e.g., wine experts, flavorists, and some hunter‐gatherer groups). In this study, we investigated how expert knowledge and daily experience affect the ability to name odors in a group of experts that has not previously been investigated in this context—Iranian herbalists; also called attars—as well as cooks and laypeople. We assessed naming accuracy and consistency for 16 herb and spice odors, collected judgments of odor perception, and evaluated participants' odor meta‐awareness. Participants' responses were overall more consistent and accurate for more frequent and familiar odors. Moreover, attars were more accurate than both cooks and laypeople at naming odors, although cooks did not perform significantly better than laypeople. Attars' perceptual ratings of odors and their overall odor meta‐awareness suggest they are also more attuned to odors than the other two groups. To conclude, Iranian attars—but not cooks—are better odor namers than laypeople. They also have greater meta‐awareness and differential perceptual responses to odors. These findings further highlight the critical role that expertise and type of experience have on olfactory functions.

    Additional information

    Supplementary Materials
  • Casillas, M., & Frank, M. C. (2017). The development of children's ability to track and predict turn structure in conversation. Journal of Memory and Language, 92, 234-253. doi:10.1016/j.jml.2016.06.013.

    Abstract

    Children begin developing turn-taking skills in infancy but take several years to fluidly integrate their growing knowledge of language into their turn-taking behavior. In two eye-tracking experiments, we measured children’s anticipatory gaze to upcoming responders while controlling linguistic cues to turn structure. In Experiment 1, we showed English and non-English conversations to English-speaking adults and children. In Experiment 2, we phonetically controlled lexicosyntactic and prosodic cues in English-only speech. Children spontaneously made anticipatory gaze switches by age two and continued improving through age six. In both experiments, children and adults made more anticipatory switches after hearing questions. Consistent with prior findings on adult turn prediction, prosodic information alone did not increase children’s anticipatory gaze shifts. But, unlike prior work with adults, lexical information alone was not sufficient either—children’s performance was best overall with lexicosyntax and prosody together. Our findings support an account in which turn tracking and turn prediction emerge in infancy and then gradually become integrated with children’s online linguistic processing.
  • Castells-Nobau, A., Eidhof, I., Fenckova, M., Brenman-Suttner, D. B., Scheffer-de Gooyert, J. M., Christine, S., Schellevis, R. L., Van der Laan, K., Quentin, C., Van Ninhuijs, L., Hofmann, F., Ejsmont, R., Fisher, S. E., Kramer, J. M., Sigrist, S. J., Simon, A. F., & Schenck, A. (2019). Conserved regulation of neurodevelopmental processes and behavior by FoxP in Drosophila. PLoS One, 14(2): e211652. doi:10.1371/journal.pone.0211652.

    Abstract

    FOXP proteins form a subfamily of evolutionarily conserved transcription factors involved in the development and functioning of several tissues, including the central nervous system. In humans, mutations in FOXP1 and FOXP2 have been implicated in cognitive deficits including intellectual disability and speech disorders. Drosophila exhibits a single ortholog, called FoxP, but due to a lack of characterized mutants, our understanding of the gene remains poor. Here we show that the dimerization property required for mammalian FOXP function is conserved in Drosophila. In flies, FoxP is enriched in the adult brain, showing strong expression in ~1000 neurons of cholinergic, glutamatergic and GABAergic nature. We generate Drosophila loss-of-function mutants and UAS-FoxP transgenic lines for ectopic expression, and use them to characterize FoxP function in the nervous system. At the cellular level, we demonstrate that Drosophila FoxP is required in larvae for synaptic morphogenesis at axonal terminals of the neuromuscular junction and for dendrite development of dorsal multidendritic sensory neurons. In the developing brain, we find that FoxP plays important roles in α-lobe mushroom body formation. Finally, at a behavioral level, we show that Drosophila FoxP is important for locomotion, habituation learning and social space behavior of adult flies. Our work shows that Drosophila FoxP is important for regulating several neurodevelopmental processes and behaviors that are related to human disease or vertebrate disease model phenotypes. This suggests a high degree of functional conservation with vertebrate FOXP orthologues and establishes flies as a model system for understanding FOXP related pathologies.
  • Catani, M., Craig, M. C., Forkel, S. J., Kanaan, R., Picchioni, M., Toulopoulou, T., Shergill, S., Williams, S., Murphy, D. G., & McGuire, P. (2011). Altered integrity of perisylvian language pathways in schizophrenia: Relationship to auditory hallucinations. Biological Psychiatry, 70(12), 1143-1150. doi:10.1016/j.biopsych.2011.06.013.

    Abstract

    Background: Functional neuroimaging supports the hypothesis that auditory verbal hallucinations (AVH) in schizophrenia result from altered functional connectivity between perisylvian language regions, although the extent to which AVH are also associated with an altered tract anatomy is less clear.

    Methods: Twenty-eight patients with schizophrenia subdivided into 17 subjects with a history of AVH and 11 without a history of hallucinations and 59 age- and IQ-matched healthy controls were recruited. The number of streamlines, fractional anisotropy (FA), and mean diffusivity were measured along the length of the arcuate fasciculus and its medial and lateral components.

    Results: Patients with schizophrenia had bilateral reduction of FA relative to controls in the arcuate fasciculi (p < .001). Virtual dissection of the subcomponents of the arcuate fasciculi revealed that these reductions were specific to connections between posterior temporal and anterior regions in the inferior frontal and parietal lobe. Also, compared with controls, the reduction in FA of these tracts was highest, and bilateral, in patients with AVH, but in patients without AVH, this reduction was reported only on the left.

    Conclusions: These findings point toward a supraregional network model of AVH in schizophrenia. They support the hypothesis that there may be selective vulnerability of specific anatomical connections to posterior temporal regions in schizophrenia and that extensive bilateral damage is associated with a greater vulnerability to AVH. If confirmed by further studies, these findings may advance our understanding of the anatomical factors that are protective against AVH and predictive of a treatment response.
  • Catani, M., Robertsson, N., Beyh, A., Huynh, V., de Santiago Requejo, F., Howells, H., Barrett, R. L., Aiello, M., Cavaliere, C., Dyrby, T. B., Krug, K., Ptito, M., D'Arceuil, H., Forkel, S. J., & Dell'Acqua, F. (2017). Short parietal lobe connections of the human and monkey brain. Cortex, 97, 339-357. doi:10.1016/j.cortex.2017.10.022.

    Abstract

    The parietal lobe has a unique place in the human brain. Anatomically, it is at the crossroad between the frontal, occipital, and temporal lobes, thus providing a middle ground for multimodal sensory integration. Functionally, it supports higher cognitive functions that are characteristic of the human species, such as mathematical cognition, semantic and pragmatic aspects of language, and abstract thinking. Despite its importance, a comprehensive comparison of human and simian intraparietal networks is missing.

    In this study, we used diffusion imaging tractography to reconstruct the major intralobar parietal tracts in twenty-one datasets acquired in vivo from healthy human subjects and eleven ex vivo datasets from five vervet and six macaque monkeys. Three regions of interest (postcentral gyrus, superior parietal lobule and inferior parietal lobule) were used to identify the tracts. Surface projections were reconstructed for both species and results compared to identify similarities or differences in tract anatomy (i.e., trajectories and cortical projections). In addition, post-mortem dissections were performed in a human brain.

    The largest tract identified in both human and monkey brains is a vertical pathway between the superior and inferior parietal lobules. This tract can be divided into an anterior (supramarginal gyrus) and a posterior (angular gyrus) component in both humans and monkey brains. The second prominent intraparietal tract connects the postcentral gyrus to both supramarginal and angular gyri of the inferior parietal lobule in humans but only to the supramarginal gyrus in the monkey brain. The third tract connects the postcentral gyrus to the anterior region of the superior parietal lobule and is more prominent in monkeys compared to humans. Finally, short U-shaped fibres in the medial and lateral aspects of the parietal lobe were identified in both species. A tract connecting the medial parietal cortex to the lateral inferior parietal cortex was observed in the monkey brain only.

    Our findings suggest a consistent pattern of intralobar parietal connections between humans and monkeys with some differences for those areas that have cytoarchitectonically distinct features in humans. The overall pattern of intraparietal connectivity supports the special role of the inferior parietal lobule in cognitive functions characteristic of humans.
  • Cathomas, F., Azzinnari, D., Bergamini, G., Sigrist, H., Buerge, M., Hoop, V., Wicki, B., Goetze, L., Soares, S. M. P., Kukelova, D., Seifritz, E., Goebbels, S., Nave, K.-A., Ghandour, M. S., Seoighe, C., Hildebrandt, T., Leparc, G., Klein, H., Stupka, E., Hengerer, B., & Pryce, C. R. (2019). Oligodendrocyte gene expression is reduced by and influences effects of chronic social stress in mice. Genes, Brain and Behavior, 18(1): e12475. doi:10.1111/gbb.12475.

    Abstract

    Oligodendrocyte gene expression is downregulated in stress-related neuropsychiatric disorders, including depression. In mice, chronic social stress (CSS) leads to depression-relevant changes in brain and emotional behavior, and the present study shows the involvement of oligodendrocytes in this model. In C57BL/6 (BL/6) mice, RNA-sequencing (RNA-Seq) was conducted with prefrontal cortex, amygdala and hippocampus from CSS and controls; a gene enrichment database for neurons, astrocytes and oligodendrocytes was used to identify cell origin of deregulated genes, and cell deconvolution was applied. To assess the potential causal contribution of reduced oligodendrocyte gene expression to CSS effects, mice heterozygous for the oligodendrocyte gene cyclic nucleotide phosphodiesterase (Cnp1) on a BL/6 background were studied; a 2 genotype (wildtype, Cnp1+/−) × 2 environment (control, CSS) design was used to investigate effects on emotional behavior and amygdala microglia. In BL/6 mice, in prefrontal cortex and amygdala tissue comprising gray and white matter, CSS downregulated expression of multiple oligodendrocyte genes encoding myelin and myelin-axon-integrity proteins, and cell deconvolution identified a lower proportion of oligodendrocytes in amygdala. Quantification of oligodendrocyte proteins in amygdala gray matter did not yield evidence for reduced translation, suggesting that CSS impacts primarily on white matter oligodendrocytes or the myelin transcriptome. In Cnp1 mice, social interaction was reduced by CSS in Cnp1+/− mice specifically; using ionized calcium-binding adaptor molecule 1 (IBA1) expression, microglia activity was increased additively by Cnp1+/− and CSS in amygdala gray and white matter. This study provides back-translational evidence that oligodendrocyte changes are relevant to the pathophysiology and potentially the treatment of stress-related neuropsychiatric disorders.
  • Cattani, A., Floccia, C., Kidd, E., Pettenati, P., Onofrio, D., & Volterra, V. (2019). Gestures and words in naming: Evidence from crosslinguistic and crosscultural comparison. Language Learning, 69(3), 709-746. doi:10.1111/lang.12346.

    Abstract

    We report on an analysis of spontaneous gesture production in 2‐year‐old children who come from three countries (Italy, United Kingdom, Australia) and who speak two languages (Italian, English), in an attempt to tease apart the influence of language and culture when comparing children from different cultural and linguistic environments. Eighty‐seven monolingual children aged 24–30 months completed an experimental task measuring their comprehension and production of nouns and predicates. The Italian children scored significantly higher than the other groups on all lexical measures. With regard to gestures, British children produced significantly fewer pointing and speech combinations compared to Italian and Australian children, who did not differ from each other. In contrast, Italian children produced significantly more representational gestures than the other two groups. We conclude that spoken language development is primarily influenced by the input language over gesture production, whereas the combination of cultural and language environments affects gesture production.
  • Chang, Y.-N., Monaghan, P., & Welbourne, S. (2019). A computational model of reading across development: Effects of literacy onset on language processing. Journal of Memory and Language, 108: 104025. doi:10.1016/j.jml.2019.05.003.

    Abstract

    Cognitive development is shaped by interactions between cognitive architecture and environmental experiences of the growing brain. We examined the extent to which this interaction during development could be observed in language processing. We focused on age of acquisition (AoA) effects in reading, where early-learned words tend to be processed more quickly and accurately relative to later-learned words. We implemented a computational model including representations of print, sound and meaning of words, with training based on children’s gradual exposure to language. The model produced AoA effects in reading and lexical decision, replicating the larger effects of AoA when semantic representations are involved. Further, the model predicted that AoA would relate to differing use of the reading system, with words acquired before versus after literacy onset showing distinctive accessing of meaning and sound representations. An analysis of behaviour from the English Lexicon Project was consistent with the predictions: Words acquired before literacy are more likely to access meaning via sound, showing a suppressed AoA effect, whereas words acquired after literacy rely more on direct print to meaning mappings, showing an exaggerated AoA effect. The reading system reveals vestigial traces of acquisition reflected in differing use of word representations during reading.
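    The staged-exposure logic behind such age-of-acquisition simulations can be shown with a toy network (a minimal illustrative sketch with assumed toy parameters, not the authors' implementation): words introduced early in training become entrenched and retain an advantage even after later words join the training set.

        # Toy demonstration of AoA-like effects from order of exposure (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)

        n_items, dim_in, dim_hid, dim_out = 40, 30, 50, 30
        X = rng.choice([0.0, 1.0], size=(n_items, dim_in))    # toy "print" patterns
        Y = rng.choice([0.0, 1.0], size=(n_items, dim_out))   # toy "meaning" patterns
        early, late = np.arange(0, 20), np.arange(20, 40)     # early- vs late-acquired words

        W1 = rng.normal(0, 0.1, (dim_in, dim_hid))
        W2 = rng.normal(0, 0.1, (dim_hid, dim_out))
        lr = 0.05

        def forward(x):
            h = np.tanh(x @ W1)
            return h, 1.0 / (1.0 + np.exp(-(h @ W2)))

        def train_step(i):
            # One gradient-descent step on squared error for item i.
            global W1, W2
            h, yhat = forward(X[i])
            dz2 = (yhat - Y[i]) * yhat * (1.0 - yhat)
            dz1 = (dz2 @ W2.T) * (1.0 - h ** 2)
            W2 -= lr * np.outer(h, dz2)
            W1 -= lr * np.outer(X[i], dz1)

        for _ in range(2000):                  # phase 1: only early words in the input
            train_step(rng.choice(early))
        for _ in range(4000):                  # phase 2: cumulative exposure to all words
            train_step(rng.choice(n_items))

        def mean_error(idx):
            return float(np.mean([(forward(X[i])[1] - Y[i]) ** 2 for i in idx]))

        print("early-acquired error:", round(mean_error(early), 3))
        print("late-acquired error: ", round(mean_error(late), 3))  # typically larger: an AoA-like effect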
  • Chang, Y.-N., & Monaghan, P. (2019). Quantity and diversity of preliteracy language exposure both affect literacy development: Evidence from a computational model of reading. Scientific Studies of Reading, 23(3), 235-253. doi:10.1080/10888438.2018.1529177.

    Abstract

    Diversity of vocabulary knowledge and quantity of language exposure prior to literacy are key predictors of reading development. However, diversity and quantity of exposure are difficult to distinguish in behavioural studies, and so the causal relations with literacy are not well known. We tested these relations by training a connectionist triangle model of reading that learned to map between semantic; phonological; and, later, orthographic forms of words. The model first learned to map between phonology and semantics, where we manipulated the quantity and diversity of this preliterate language experience. Then the model learned to read. Both diversity and quantity of exposure had unique effects on reading performance, with larger effects for written word comprehension than for reading fluency. The results further showed that quantity of preliteracy language exposure was beneficial only when this was to a varied vocabulary and could be an impediment when exposed to a limited vocabulary.
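    The quantity-by-diversity manipulation of preliterate experience can be made concrete with a small corpus-construction sketch (token and type counts below are assumptions chosen for illustration, not the values used in the paper): quantity is the total number of training tokens, while diversity is the number of distinct word types those tokens are drawn from.

        # Sketch of crossing quantity and diversity of preliterate exposure (hypothetical values).
        import numpy as np

        rng = np.random.default_rng(1)

        def exposure_stream(n_tokens, n_types, zipf_a=1.2):
            """Sample a stream of word-type indices: n_tokens sets quantity,
            n_types sets diversity, with a Zipf-like frequency skew."""
            ranks = np.arange(1, n_types + 1)
            probs = ranks ** (-zipf_a)
            probs /= probs.sum()
            return rng.choice(n_types, size=n_tokens, p=probs)

        conditions = {
            ("low quantity", "low diversity"):   exposure_stream(5_000, 200),
            ("low quantity", "high diversity"):  exposure_stream(5_000, 2_000),
            ("high quantity", "low diversity"):  exposure_stream(50_000, 200),
            ("high quantity", "high diversity"): exposure_stream(50_000, 2_000),
        }

        for (quantity, diversity), stream in conditions.items():
            print(f"{quantity:13s} / {diversity:14s}: "
                  f"{stream.size:6d} tokens, {np.unique(stream).size:4d} types seen")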
  • Chang, F., Kidd, E., & Rowland, C. F. (2013). Prediction in processing is a by-product of language learning [Commentary on Pickering & Garrod: An integrated theory of language production and comprehension]. Behavioral and Brain Sciences, 36(4), 350-351. doi:10.1017/S0140525X12001495.

    Abstract

    Both children and adults predict the content of upcoming language, suggesting that prediction is useful for learning as well as processing. We present an alternative model which can explain prediction behaviour as a by-product of language learning. We suggest that a consideration of language acquisition places important constraints on Pickering & Garrod's (P&G's) theory.
  • Chen, X. S., Penny, D., & Collins, L. J. (2011). Characterization of RNase MRP RNA and novel snoRNAs from Giardia intestinalis and Trichomonas vaginalis. BMC Genomics, 12, 550. doi:10.1186/1471-2164-12-550.

    Abstract

    Background: Eukaryotic cells possess a complex network of RNA machineries which function in RNA-processing and cellular regulation which includes transcription, translation, silencing, editing and epigenetic control. Studies of model organisms have shown that many ncRNAs of the RNA-infrastructure are highly conserved, but little is known from non-model protists. In this study we have conducted a genome-scale survey of medium-length ncRNAs from the protozoan parasites Giardia intestinalis and Trichomonas vaginalis. Results: We have identified the previously ‘missing’ Giardia RNase MRP RNA, which is a key ribozyme involved in pre-rRNA processing. We have also uncovered 18 new H/ACA box snoRNAs, expanding our knowledge of the H/ACA family of snoRNAs. Conclusions: Results indicate that Giardia intestinalis and Trichomonas vaginalis, like their distant multicellular relatives, contain a rich infrastructure of RNA-based processing. From here we can investigate the evolution of RNA processing networks in eukaryotes.
  • Chen, A., Gussenhoven, C., & Rietveld, T. (2004). Language specificity in perception of paralinguistic intonational meaning. Language and Speech, 47(4), 311-349.

    Abstract

    This study examines the perception of paralinguistic intonational meanings deriving from Ohala’s Frequency Code (Experiment 1) and Gussenhoven’s Effort Code (Experiment 2) in British English and Dutch. Native speakers of British English and Dutch listened to a number of stimuli in their native language and judged each stimulus on four semantic scales deriving from these two codes: SELF-CONFIDENT versus NOT SELF-CONFIDENT, FRIENDLY versus NOT FRIENDLY (Frequency Code); SURPRISED versus NOT SURPRISED, and EMPHATIC versus NOT EMPHATIC (Effort Code). The stimuli, which were lexically equivalent across the two languages, differed in pitch contour, pitch register and pitch span in Experiment 1, and in pitch register, peak height, peak alignment and end pitch in Experiment 2. Contrary to the traditional view that the paralinguistic usage of intonation is similar across languages, it was found that British English and Dutch listeners differed considerably in the perception of “confident,” “friendly,” “emphatic,” and “surprised.” The present findings support a theory of paralinguistic meaning based on the universality of biological codes, which however acknowledges a language-specific component in the implementation of these codes.
  • Chen, X. S., Reader, R. H., Hoischen, A., Veltman, J. A., Simpson, N. H., Francks, C., Newbury, D. F., & Fisher, S. E. (2017). Next-generation DNA sequencing identifies novel gene variants and pathways involved in specific language impairment. Scientific Reports, 7: 46105. doi:10.1038/srep46105.

    Abstract

    A significant proportion of children have unexplained problems acquiring proficient linguistic skills despite adequate intelligence and opportunity. Developmental language disorders are highly heritable with substantial societal impact. Molecular studies have begun to identify candidate loci, but much of the underlying genetic architecture remains undetermined. We performed whole-exome sequencing of 43 unrelated probands affected by severe specific language impairment, followed by independent validations with Sanger sequencing, and analyses of segregation patterns in parents and siblings, to shed new light on aetiology. By first focusing on a pre-defined set of known candidates from the literature, we identified potentially pathogenic variants in genes already implicated in diverse language-related syndromes, including ERC1, GRIN2A, and SRPX2. Complementary analyses suggested novel putative candidates carrying validated variants which were predicted to have functional effects, such as OXR1, SCN9A and KMT2D. We also searched for potential “multiple-hit” cases; one proband carried a rare AUTS2 variant in combination with a rare inherited haplotype affecting STARD9, while another carried a novel nonsynonymous variant in SEMA6D together with a rare stop-gain in SYNPR. On broadening scope to all rare and novel variants throughout the exomes, we identified biological themes that were enriched for such variants, including microtubule transport and cytoskeletal regulation.
  • Chen, A. (2011). Tuning information packaging: Intonational realization of topic and focus in child Dutch. Journal of Child Language, 38, 1055-1083. doi:10.1017/S0305000910000541.

    Abstract

    This study examined how four- to five-year-olds and seven- to eight-year-olds used intonation (accent placement and accent type) to encode topic and focus in Dutch. Naturally spoken declarative sentences with either sentence-initial topic and sentence-final focus or sentence-initial focus and sentence-final topic were elicited via a picture-matching game. Results showed that the four- to five-year-olds were adult-like in topic-marking, but were not yet fully adult-like in focus-marking, in particular, in the use of accent type in sentence-final focus (i.e. showing no preference for H*L). Between age five and seven, the use of accent type was further developed. In contrast to the four- to five-year-olds, the seven- to eight-year-olds showed a preference for H*L in sentence-final focus. Furthermore, they used accent type to distinguish sentence-initial focus from sentence-initial topic in addition to phonetic cues.
  • Cho, T. (2004). Prosodically conditioned strengthening and vowel-to-vowel coarticulation in English. Journal of Phonetics, 32(2), 141-176. doi:10.1016/S0095-4470(03)00043-3.

    Abstract

    The goal of this study is to examine how the degree of vowel-to-vowel coarticulation varies as a function of prosodic factors such as nuclear-pitch accent (accented vs. unaccented), level of prosodic boundary (Prosodic Word vs. Intermediate Phrase vs. Intonational Phrase), and position-in-prosodic-domain (initial vs. final). It is hypothesized that vowels in prosodically stronger locations (e.g., in accented syllables and at a higher prosodic boundary) are not only coarticulated less with their neighboring vowels, but they also exert a stronger influence on their neighbors. Measurements of tongue position for English /a i/ over time were obtained with Carsten’s electromagnetic articulography. Results showed that vowels in prosodically stronger locations are coarticulated less with neighboring vowels, but do not exert a stronger influence on the articulation of neighboring vowels. An examination of the relationship between coarticulation and duration revealed that (a) accent-induced coarticulatory variation cannot be attributed to a duration factor and (b) some of the data with respect to boundary effects may be accounted for by the duration factor. This suggests that to the extent that prosodically conditioned coarticulatory variation is duration-independent, there is no absolute causal relationship from duration to coarticulation. It is proposed that prosodically conditioned V-to-V coarticulatory reduction is another type of strengthening that occurs in prosodically strong locations. The prosodically driven coarticulatory patterning is taken to be part of the phonetic signatures of the hierarchically nested structure of prosody.
  • Cho, T., & McQueen, J. M. (2011). Perceptual recovery from consonant-cluster simplification using language-specific phonological knowledge. Journal of Psycholinguistic Research, 40, 253-274. doi:10.1007/s10936-011-9168-0.

    Abstract

    Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but better with physically-present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having target acoustic-phonetic information emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes which rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
  • Choi, J., Cutler, A., & Broersma, M. (2017). Early development of abstract language knowledge: Evidence from perception-production transfer of birth-language memory. Royal Society Open Science, 4: 160660. doi:10.1098/rsos.160660.

    Abstract

    Children adopted early in life into another linguistic community typically forget their birth language but retain, unaware, relevant linguistic knowledge that may facilitate (re)learning of birth-language patterns. Understanding the nature of this knowledge can shed light on how language is acquired. Here, international adoptees from Korea with Dutch as their current language, and matched Dutch-native controls, provided speech production data on a Korean consonantal distinction unlike any Dutch distinctions, at the outset and end of an intensive perceptual training. The productions, elicited in a repetition task, were identified and rated by Korean listeners. Adoptees' production scores improved significantly more across the training period than control participants' scores, and, for adoptees only, relative production success correlated significantly with the rate of learning in perception (which had, as predicted, also surpassed that of the controls). Of the adoptee group, half had been adopted at 17 months or older (when talking would have begun), while half had been prelinguistic (under six months). The former group, with production experience, showed no advantage over the group without. Thus the adoptees' retained knowledge of Korean transferred from perception to production and appears to be abstract in nature rather than dependent on the amount of experience.
  • Choi, J., Broersma, M., & Cutler, A. (2017). Early phonology revealed by international adoptees' birth language retention. Proceedings of the National Academy of Sciences of the United States of America, 114(28), 7307-7312. doi:10.1073/pnas.1706405114.

    Abstract

    Until at least 6 mo of age, infants show good discrimination for familiar phonetic contrasts (i.e., those heard in the environmental language) and contrasts that are unfamiliar. Adult-like discrimination (significantly worse for nonnative than for native contrasts) appears only later, by 9–10 mo. This has been interpreted as indicating that infants have no knowledge of phonology until vocabulary development begins, after 6 mo of age. Recently, however, word recognition has been observed before age 6 mo, apparently decoupling the vocabulary and phonology acquisition processes. Here we show that phonological acquisition is also in progress before 6 mo of age. The evidence comes from retention of birth-language knowledge in international adoptees. In the largest ever such study, we recruited 29 adult Dutch speakers who had been adopted from Korea when young and had no conscious knowledge of Korean language at all. Half were adopted at age 3–5 mo (before native-specific discrimination develops) and half at 17 mo or older (after word learning has begun). In a short intensive training program, we observe that adoptees (compared with 29 matched controls) more rapidly learn tripartite Korean consonant distinctions without counterparts in their later-acquired Dutch, suggesting that the adoptees retained phonological knowledge about the Korean distinction. The advantage is equivalent for the younger-adopted and the older-adopted groups, and both groups not only acquire the tripartite distinction for the trained consonants but also generalize it to untrained consonants. Although infants younger than 6 mo can still discriminate unfamiliar phonetic distinctions, this finding indicates that native-language phonological knowledge is nonetheless being acquired at that age.
  • Cholin, J., Schiller, N. O., & Levelt, W. J. M. (2004). The preparation of syllables in speech production. Journal of Memory and Language, 50(1), 47-61. doi:10.1016/j.jml.2003.08.003.

    Abstract

    Models of speech production assume that syllables play a functional role in the process of word-form encoding in speech production. In this study, we investigate this claim and specifically provide evidence about the level at which syllables come into play. We report two studies using an odd-man-out variant of the implicit priming paradigm to examine the role of the syllable during the process of word formation. Our results show that this modified version of the implicit priming paradigm can trace the emergence of syllabic structure during spoken word generation. Comparing these results to prior syllable priming studies, we conclude that syllables emerge at the interface between phonological and phonetic encoding. The results are discussed in terms of the WEAVER++ model of lexical access.
  • Cholin, J., Dell, G. S., & Levelt, W. J. M. (2011). Planning and articulation in incremental word production: Syllable-frequency effects in English. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 109-122. doi:10.1037/a0021322.

    Abstract

    We investigated the role of syllables during speech planning in English by measuring syllable-frequency effects. So far, syllable-frequency effects in English have not been reported. English has poorly defined syllable boundaries, and thus the syllable might not function as a prominent unit in English speech production. Speakers produced either monosyllabic (Experiment 1) or disyllabic (Experiment 2–4) pseudowords as quickly as possible in response to symbolic cues. Monosyllabic targets consisted of either high- or low-frequency syllables, whereas disyllabic items contained either a 1st or 2nd syllable that was frequency-manipulated. Significant syllable-frequency effects were found in all experiments. Whereas previous findings for disyllables in Dutch and Spanish—languages with relatively clear syllable boundaries—showed effects of a frequency manipulation on 1st but not 2nd syllables, in our study English speakers were sensitive to the frequency of both syllables. We interpret this sensitivity as an indication that the production of English has more extensive planning scopes at the interface of phonetic encoding and articulation.
  • Christoffels, I. K., Ganushchak, L. Y., & Koester, D. (2013). Language conflict in translation; An ERP study of translation production. Journal of Cognitive Psychology, 25, 646-664. doi:10.1080/20445911.2013.821127.

    Abstract

    Although most bilinguals can translate with relative ease, the underlying neuro-cognitive processes are poorly understood. Using event-related brain potentials (ERPs) we investigated the temporal course of word translation. Participants translated words from and to their first (L1, Dutch) and second (L2, English) language while ERPs were recorded. Interlingual homographs (IHs) were included to introduce language conflict. IHs share orthographic form but have different meanings in L1 and L2 (e.g., room in Dutch refers to cream). Results showed that the brain distinguished between translation directions as early as 200 ms after word presentation: the P2 amplitudes were more positive in the L1→L2 translation direction. The N400 was also modulated by translation direction, with more negative amplitudes in the L2→L1 translation direction. Furthermore, the IHs were translated more slowly, induced more errors, and elicited more negative N400 amplitudes than control words. In a naming experiment, participants read aloud the same words in L1 or L2 while ERPs were recorded. Results showed no effect of either IHs or language, suggesting that task schemas may be crucially related to language control in translation. Furthermore, translation appears to involve conceptual processing in both translation directions, and the task goal appears to influence how words are processed.

  • Chu, M., & Kita, S. (2011). The nature of gestures’ beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140, 102-116. doi:10.1037/a0021790.

    Abstract

    Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In the study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty in solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and the gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing can be generalized to a different spatial visualization task when two tasks require similar spatial transformation processes. We conclude that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.
  • Claus, A. (2004). Access management system. Language Archive Newsletter, 1(2), 5.
  • Cleary, R. A., Poliakoff, E., Galpin, A., Dick, J. P., & Holler, J. (2011). An investigation of co-speech gesture production during action description in Parkinson’s disease. Parkinsonism & Related Disorders, 17, 753-756. doi:10.1016/j.parkreldis.2011.08.001.

    Abstract

    Methods
    The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area.
    Results
    Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced.
    Conclusions
    This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD.
  • Coco, M. I., Araujo, S., & Petersson, K. M. (2017). Disentangling stimulus plausibility and contextual congruency: Electro-physiological evidence for differential cognitive dynamics. Neuropsychologia, 96, 150-163. doi:10.1016/j.neuropsychologia.2016.12.008.

    Abstract

    Expectancy mechanisms are routinely used by the cognitive system in stimulus processing and in anticipation of appropriate responses. Electrophysiology research has documented negative shifts of brain activity when expectancies are violated within a local stimulus context (e.g., reading an implausible word in a sentence) or more globally between consecutive stimuli (e.g., a narrative of images with an incongruent end). In this EEG study, we examine the interaction between expectancies operating at the level of stimulus plausibility and at the more global level of contextual congruency to provide evidence for, or against, a dissociation of the underlying processing mechanisms. We asked participants to verify the congruency of pairs of cross-modal stimuli (a sentence and a scene), which varied in plausibility. ANOVAs on ERP amplitudes in selected windows of interest show that congruency violation has longer-lasting (from 100 to 500 ms) and more widespread effects than plausibility violation (from 200 to 400 ms). We also observed critical interactions between these factors, whereby incongruent and implausible pairs elicited stronger negative shifts than their congruent counterpart, both early on (100–200 ms) and between 400–500 ms. Our results suggest that the integration mechanisms are sensitive to both global and local effects of expectancy in a modality independent manner. Overall, we provide novel insights into the interdependence of expectancy during meaning integration of cross-modal stimuli in a verification task.
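    The windows-of-interest amplitude measure that feeds such ANOVAs can be sketched as follows (simulated single-electrode data; the sampling rate, window bounds and condition labels are assumptions for illustration, not the study's recording parameters).

        # Schematic windows-of-interest ERP amplitude measure on simulated epochs.
        import numpy as np

        rng = np.random.default_rng(2)
        sfreq = 250                                   # Hz (assumed)
        times = np.arange(-0.2, 0.8, 1 / sfreq)       # epoch from -200 to 800 ms

        # Fake single-trial epochs for two conditions at one electrode (60 trials each).
        epochs = {
            "congruent":   rng.normal(0.0, 2.0, size=(60, times.size)),
            "incongruent": rng.normal(-1.0, 2.0, size=(60, times.size)),  # shifted more negative
        }

        windows = {"100-200 ms": (0.1, 0.2), "200-400 ms": (0.2, 0.4), "400-500 ms": (0.4, 0.5)}

        def window_mean(trials, t0, t1):
            mask = (times >= t0) & (times < t1)
            return trials[:, mask].mean(axis=1)       # one mean amplitude per trial

        for name, (t0, t1) in windows.items():
            means = {cond: float(window_mean(trls, t0, t1).mean()) for cond, trls in epochs.items()}
            print(name, {k: round(v, 2) for k, v in means.items()})
        # Per-trial window means like these would then enter condition-by-window ANOVAs.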
  • Cohen, E. (2011). Broadening the critical perspective on supernatural punishment theories. Religion, Brain & Behavior, 1(1), 70-72. doi:10.1080/2153599X.2011.558709.
  • Cohen, E., Burdett, E., Knight, N., & Barrett, J. (2011). Cross-cultural similarities and differences in person-body reasoning: Experimental evidence from the United Kingdom and Brazilian Amazon. Cognitive Science, 35, 1282-1304. doi:10.1111/j.1551-6709.2011.01172.x.

    Abstract

    We report the results of a cross-cultural investigation of person-body reasoning in the United Kingdom and northern Brazilian Amazon (Marajó Island). The study provides evidence that directly bears upon divergent theoretical claims in cognitive psychology and anthropology, respectively, on the cognitive origins and cross-cultural incidence of mind-body dualism. In a novel reasoning task, we found that participants across the two sample populations parsed a wide range of capacities similarly in terms of the capacities’ perceived anchoring to bodily function. Patterns of reasoning concerning the respective roles of physical and biological properties in sustaining various capacities did vary between sample populations, however. Further, the data challenge prior ad-hoc categorizations in the empirical literature on the developmental origins of and cognitive constraints on psycho-physical reasoning (e.g., in afterlife concepts). We suggest cross-culturally validated categories of “Body Dependent” and “Body Independent” items for future developmental and cross-cultural research in this emerging area.
  • Cohen, E., & Haun, D. B. M. (2013). The development of tag-based cooperation via a socially acquired trait. Evolution and Human Behavior, 24, 230-235. doi:10.1016/j.evolhumbehav.2013.02.001.

    Abstract

    Recent theoretical models have demonstrated that phenotypic traits can support the non-random assortment of cooperators in a population, thereby permitting the evolution of cooperation. In these “tag-based models”, cooperators modulate cooperation according to an observable and hard-to-fake trait displayed by potential interaction partners. Socially acquired vocalizations in general, and speech accent among humans in particular, are frequently proposed as hard to fake and hard to hide traits that display sufficient cross-populational variability to reliably guide such social assortment in fission–fusion societies. Adults’ sensitivity to accent variation in social evaluation and decisions about cooperation is well-established in sociolinguistic research. The evolutionary and developmental origins of these biases are largely unknown, however. Here, we investigate the influence of speech accent on 5–10-year-old children's developing social and cooperative preferences across four Brazilian Amazonian towns. Two sites have a single dominant accent, and two sites have multiple co-existing accent varieties. We found that children's friendship and resource allocation preferences were guided by accent only in sites characterized by accent heterogeneity. Results further suggest that this may be due to a more sensitively tuned ear for accent variation. The demonstrated local-accent preference did not hold in the face of personal cost. Results suggest that mechanisms guiding tag-based assortment are likely tuned according to locally relevant tag-variation.

    Additional information

    Cohen_Suppl_Mat_2013.docx
  • Comasco, E., Schijven, D., de Maeyer, H., Vrettou, M., Nylander, I., Sundström-Poromaa, I., & Olivier, J. D. A. (2019). Constitutive serotonin transporter reduction resembles maternal separation with regard to stress-related gene expression. ACS Chemical Neuroscience, 10, 3132-3142. doi:10.1021/acschemneuro.8b00595.

    Abstract

    Interactive effects between allelic variants of the serotonin transporter (5-HTT) promoter-linked polymorphic region (5-HTTLPR) and stressors on depression symptoms have been documented, as well as questioned, by meta-analyses. Translational models of constitutive 5-htt reduction and experimentally controlled stressors often led to inconsistent behavioral and molecular findings and often did not include females. The present study sought to investigate the effect of 5-htt genotype, maternal separation, and sex on the expression of stress-related candidate genes in the rat hippocampus and frontal cortex. The mRNA expression levels of Avp, Pomc, Crh, Crhbp, Crhr1, Bdnf, Ntrk2, Maoa, Maob, and Comt were assessed in the hippocampus and frontal cortex of 5-htt+/− and 5-htt+/+ male and female adult rats exposed, or not, to daily maternal separation for 180 min during the first 2 postnatal weeks. Gene- and brain region-dependent, but sex-independent, interactions between 5-htt genotype and maternal separation were found. Gene expression levels were higher in 5-htt+/+ rats not exposed to maternal separation compared with the other experimental groups. Maternal separation and 5-htt+/− genotype did not yield additive effects on gene expression. Correlative relationships, mainly positive, were observed within, but not across, brain regions in all groups except in non-maternally separated 5-htt+/+ rats. Gene expression patterns in the hippocampus and frontal cortex of rats exposed to maternal separation resembled the ones observed in rats with reduced 5-htt expression regardless of sex. These results suggest that floor effects of 5-htt reduction and maternal separation might explain inconsistent findings in humans and rodents.
  • Connell, L., Cai, Z. G., & Holler, J. (2013). Do you see what I'm singing? Visuospatial movement biases pitch perception. Brain and Cognition, 81, 124-130. doi:10.1016/j.bandc.2012.09.005.

    Abstract

    The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.
  • Corps, R. E., Pickering, M. J., & Gambi, C. (2019). Predicting turn-ends in discourse context. Language, Cognition and Neuroscience, 34(5), 615-627. doi:10.1080/23273798.2018.1552008.

    Abstract

    Research suggests that during conversation, interlocutors coordinate their utterances by predicting the speaker’s forthcoming utterance and its end. In two experiments, we used a button-pressing task, in which participants pressed a button when they thought a speaker reached the end of their utterance, to investigate what role the wider discourse plays in turn-end prediction. Participants heard two-utterance sequences, in which the content of the second utterance was or was not constrained by the content of the first. In both experiments, participants responded earlier, but not more precisely, when the first utterance was constraining rather than unconstraining. Response times and precision were unaffected by whether they listened to dialogues or monologues (Experiment 1) and by whether they read the first utterance out loud or silently (Experiment 2), providing no indication that activation of production mechanisms facilitates prediction. We suggest that content predictions aid comprehension but not turn-end prediction.

    Additional information

    plcp_a_1552008_sm1646.pdf
  • Cortázar-Chinarro, M., Lattenkamp, E. Z., Meyer-Lucht, Y., Luquet, E., Laurila, A., & Höglund, J. (2017). Drift, selection, or migration? Processes affecting genetic differentiation and variation along a latitudinal gradient in an amphibian. BMC Evolutionary Biology, 17: 189. doi:10.1186/s12862-017-1022-z.

    Abstract

    Past events like fluctuations in population size and post-glacial colonization processes may influence the relative importance of genetic drift, migration and selection when determining the present day patterns of genetic variation. We disentangle how drift, selection and migration shape neutral and adaptive genetic variation in 12 moor frog populations along a 1700 km latitudinal gradient. We studied genetic differentiation and variation at an MHC exon II locus and a set of 18 microsatellites.
    Results

    Using outlier analyses, we identified the MHC II exon 2 (corresponding to the β-2 domain) locus and one microsatellite locus (RCO8640) to be subject to diversifying selection, while five microsatellite loci showed signals of stabilizing selection among populations. STRUCTURE and DAPC analyses on the neutral microsatellites assigned populations to a northern and a southern cluster, reflecting two different post-glacial colonization routes found in previous studies. Genetic variation overall was lower in the northern cluster. The signature of selection on MHC exon II was weaker in the northern cluster, possibly as a consequence of smaller and more fragmented populations.
    Conclusion

    Our results show that historical demographic processes combined with selection and drift have led to a complex pattern of differentiation along the gradient where some loci are more divergent among populations than predicted from drift expectations due to diversifying selection, while other loci are more uniform among populations due to stabilizing selection. Importantly, both overall and MHC genetic variation are lower at northern latitudes. Due to lower evolutionary potential, the low genetic variation in northern populations may increase the risk of extinction when confronted with emerging pathogens and climate change.
  • Cousminer, D. L., Berry, D. J., Timpson, N. J., Ang, W., Thiering, E., Byrne, E. M., Taal, H. R., Huikari, V., Bradfield, J. P., Kerkhof, M., Groen-Blokhuis, M. M., Kreiner-Møller, E., Marinelli, M., Holst, C., Leinonen, J. T., Perry, J. R. B., Surakka, I., Pietiläinen, O., Kettunen, J., Anttila, V., Kaakinen, M., Sovio, U., Pouta, A., Das, S., Lagou, V., Power, C., Prokopenko, I., Evans, D. M., Kemp, J. P., St Pourcain, B., Ring, S., Palotie, A., Kajantie, E., Osmond, C., Lehtimäki, T., Viikari, J. S., Kähönen, M., Warrington, N. M., Lye, S. J., Palmer, L. J., Tiesler, C. M. T., Flexeder, C., Montgomery, G. W., Medland, S. E., Hofman, A., Hakonarson, H., Guxens, M., Bartels, M., Salomaa, V., Murabito, J. M., Kaprio, J., Sørensen, T. I. A., Ballester, F., Bisgaard, H., Boomsma, D. I., Koppelman, G. H., Grant, S. F. A., Jaddoe, V. W. V., Martin, N. G., Heinrich, J., Pennell, C. E., Raitakari, O. T., Eriksson, J. G., Smith, G. D., Hyppönen, E., Järvelin, M.-R., McCarthy, M. I., Ripatti, S., Widén, E., Consortium ReproGen, & Consortium Early Growth Genetics (EGG) (2013). Genome-wide association and longitudinal analyses reveal genetic loci linking pubertal height growth, pubertal timing and childhood adiposity. Human Molecular Genetics, 22(13), 2735-2747. doi:10.1093/hmg/ddt104.

    Abstract

    The pubertal height growth spurt is a distinctive feature of childhood growth reflecting both the central onset of puberty and local growth factors. Although little is known about the underlying genetics, growth variability during puberty correlates with adult risks for hormone-dependent cancer and adverse cardiometabolic health. The only gene so far associated with pubertal height growth, LIN28B, pleiotropically influences childhood growth, puberty and cancer progression, pointing to shared underlying mechanisms. To discover genetic loci influencing pubertal height and growth and to place them in context of overall growth and maturation, we performed genome-wide association meta-analyses in 18 737 European samples utilizing longitudinally collected height measurements. We found significant associations (P < 1.67 × 10^-8) at 10 loci, including LIN28B. Five loci associated with pubertal timing, all impacting multiple aspects of growth. In particular, a novel variant correlated with expression of MAPK3, and associated both with increased prepubertal growth and earlier menarche. Another variant near ADCY3-POMC associated with increased body mass index, reduced pubertal growth and earlier puberty. Whereas epidemiological correlations suggest that early puberty marks a pathway from rapid prepubertal growth to reduced final height and adult obesity, our study shows that individual loci associating with pubertal growth have variable longitudinal growth patterns that may differ from epidemiological observations. Overall, this study uncovers part of the complex genetic architecture linking pubertal height growth, the timing of puberty and childhood obesity and provides new information to pinpoint processes linking these traits.
  • Cozijn, R., Noordman, L. G., & Vonk, W. (2011). Propositional integration and world-knowledge inference: Processes in understanding because sentences. Discourse Processes, 48, 475-500. doi:10.1080/0163853X.2011.594421.

    Abstract

    The issue addressed in this study is whether propositional integration and world-knowledge inference can be distinguished as separate processes during the comprehension of Dutch omdat (because) sentences. “Propositional integration” refers to the process by which the reader establishes the type of relation between two clauses or sentences. “World-knowledge inference” refers to the process of deriving the general causal relation and checking it against the reader's world knowledge. An eye-tracking experiment showed that the presence of the conjunction speeds up the processing of the words immediately following the conjunction, and slows down the processing of the sentence final words in comparison to the absence of the conjunction. A second, subject-paced reading experiment replicated the reading time findings, and the results of a verification task confirmed that the effect at the end of the sentence was due to inferential processing. The findings evidence integrative processing and inferential processing, respectively.
  • Cozijn, R., Commandeur, E., Vonk, W., & Noordman, L. G. (2011). The time course of the use of implicit causality information in the processing of pronouns: A visual world paradigm study. Journal of Memory and Language, 64, 381-403. doi:10.1016/j.jml.2011.01.001.

    Abstract

    Several theoretical accounts have been proposed with respect to the issue how quickly the implicit causality verb bias affects the understanding of sentences such as “John beat Pete at the tennis match, because he had played very well”. They can be considered as instances of two viewpoints: the focusing and the integration account. The focusing account claims that the bias should be manifest soon after the verb has been processed, whereas the integration account claims that the interpretation is deferred until disambiguating information is encountered. Up to now, this issue has remained unresolved because materials or methods have failed to address it conclusively. We conducted two experiments that exploited the visual world paradigm and ambiguous pronouns in subordinate because clauses. The first experiment presented implicit causality sentences with the task to resolve the ambiguous pronoun. To exclude strategic processing, in the second experiment, the task was to answer simple comprehension questions and only a minority of the sentences contained implicit causality verbs. In both experiments, the implicit causality of the verb had an effect before the disambiguating information was available. This result supported the focusing account.
  • Cristia, A., McGuire, G. L., Seidl, A., & Francis, A. L. (2011). Effects of the distribution of acoustic cues on infants' perception of sibilants. Journal of Phonetics, 39, 388-402. doi:10.1016/j.wocn.2011.02.004.

    Abstract

    A current theoretical view proposes that infants converge on the speech categories of their native language by attending to frequency distributions that occur in the acoustic input. To date, the only empirical support for this statistical learning hypothesis comes from studies where a single, salient dimension was manipulated. Additional evidence is sought here, by introducing a less salient pair of categories supported by multiple cues. We exposed English-learning infants to a multi-cue bidimensional grid ranging between retroflex and alveolopalatal sibilants in prevocalic position. This contrast is substantially more difficult according to previous cross-linguistic and perceptual research, and its perception is driven by cues in both the consonantal and the following vowel portions. Infants heard one of two distributions (flat, or with two peaks), and were tested with sounds varying along only one dimension. Infants' responses differed depending on the familiarization distribution, and their performance was equally good for the vocalic and the frication dimension, lending some support to the statistical hypothesis even in this harder learning situation. However, learning was restricted to the retroflex category, and a control experiment showed that lack of learning for the alveolopalatal category was not due to the presence of a competing category. Thus, these results contribute fundamental evidence on the extent and limitations of the statistical hypothesis as an explanation for infants' perceptual tuning.
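
    The familiarization logic described in this abstract (equal exposure to every step of a stimulus continuum versus exposure concentrated around two peaks) can be sketched in a few lines of code. The continuum length, token counts and peak positions below are invented for illustration and are not the paper's stimulus set; this is a minimal sketch of the general distributional-learning design, not the authors' materials.

      import numpy as np

      def exposure_counts(n_steps=8, distribution="bimodal", total_tokens=64):
          """Hypothetical token counts per continuum step for a familiarization
          phase: 'flat' presents every step equally often, while 'bimodal'
          concentrates tokens around two peaks (steps 2 and 7 here), the kind
          of frequency cue that distributional-learning accounts assume
          learners can exploit."""
          steps = np.arange(1, n_steps + 1)
          if distribution == "flat":
              weights = np.ones(n_steps)
          elif distribution == "bimodal":
              # Two Gaussian bumps centred on interior steps of the continuum.
              weights = (np.exp(-0.5 * ((steps - 2) / 0.8) ** 2)
                         + np.exp(-0.5 * ((steps - 7) / 0.8) ** 2))
          else:
              raise ValueError("distribution must be 'flat' or 'bimodal'")
          probs = weights / weights.sum()
          return np.round(probs * total_tokens).astype(int)

      print("flat   :", exposure_counts(distribution="flat"))
      print("bimodal:", exposure_counts(distribution="bimodal"))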
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. Because this is an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results and to download the most recent version of the table. Thus, this database is ideal for carrying out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed at facilitating methodological convergence through the standardization of reports.
  • Cristia, A. (2011). Fine-grained variation in caregivers' speech predicts their infants' discrimination. Journal of the Acoustical Society of America, 129, 3271-3280. doi:10.1121/1.3562562.

    Abstract

    Within the debate on the mechanisms underlying infants’ perceptual acquisition, one hypothesis proposes that infants’ perception is directly affected by the acoustic implementation of sound categories in the speech they hear. In consonance with this view, the present study shows that individual variation in fine-grained, subphonemic aspects of the acoustic realization of /s/ in caregivers’ speech predicts infants’ discrimination of this sound from the highly similar /∫/, suggesting that learning based on acoustic cue distributions may indeed drive natural phonological acquisition.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

    Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development, and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

    Additional information

    Cristia_Suppl_Material.xls
  • Cristia, A., Seidl, A., & Gerken, L. (2011). Learning classes of sounds in infancy. University of Pennsylvania Working Papers in Linguistics, 17, 9.

    Abstract

    Adults' phonotactic learning is affected by perceptual biases. One such bias concerns learning of constraints affecting groups of sounds: all else being equal, learning constraints affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptual bias could be a given, for example, the result of innately guided learning; alternatively, it could be due to human learners’ experience with sounds. Using artificial grammars, we investigated whether such a bias arises in development, or whether it is present as soon as infants can learn phonotactics. Seven-month-old English-learning infants fail to generalize a phonotactic pattern involving fricatives and nasals, which does not form a coherent phonetic group, but succeed with the natural class of oral and nasal stops. In this paper, we report an experiment that explored whether those results also follow in a cohort of 4-month-olds. Unlike the older infants, 4-month-olds were able to generalize both groups, suggesting that the perceptual bias that makes phonotactic constraints on natural classes easier to learn is likely the effect of experience.
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
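
    The similarity comparison in this abstract (the test onsets /b/, /t/ and /p/ standing in different featural relations to the exposure onsets /d g v z Z/) can be illustrated with a toy distance computation. The feature set below is a simplified textbook choice (voiced, continuant, coronal, labial) and the averaging rule is an assumption; neither is the paper's model, and the sketch is only meant to show what "featural distance" means here.

      # Toy featural-distance sketch: average number of mismatching features
      # between a test onset and the onsets attested during exposure.
      # Feature order: (voiced, continuant, coronal, labial); values are
      # simplified textbook distinctive features, not the paper's model.
      FEATURES = {
          "d": (1, 0, 1, 0), "g": (1, 0, 0, 0), "v": (1, 1, 0, 1),
          "z": (1, 1, 1, 0), "Z": (1, 1, 1, 0),   # "Z" stands for /ʒ/
          "b": (1, 0, 0, 1), "t": (0, 0, 1, 0), "p": (0, 0, 0, 1),
      }

      EXPOSURE = ["d", "g", "v", "z", "Z"]

      def mean_featural_distance(test_onset, exposure=EXPOSURE):
          """Lower values mean the test onset is featurally closer to the
          exposure set as a whole."""
          test = FEATURES[test_onset]
          dists = [sum(a != b for a, b in zip(test, FEATURES[e])) for e in exposure]
          return sum(dists) / len(dists)

      for onset in ["d", "b", "t", "p"]:
          print(onset, round(mean_featural_distance(onset), 2))

    With these toy features the ordering comes out as the abstract describes: the attested /d/ is closest to the exposure set, /b/ is nearer than /t/, and /p/ is the most distant.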
  • Croijmans, I., Speed, L., Arshamian, A., & Majid, A. (2019). Measuring the multisensory imagery of wine: The Vividness of Wine Imagery Questionnaire. Multisensory Research, 32(3), 179-195. doi:10.1163/22134808-20191340.

    Abstract

    When we imagine objects or events, we often engage in multisensory mental imagery. Yet, investigations of mental imagery have typically focused on only one sensory modality — vision. One reason for this is that the most common tool for the measurement of imagery, the questionnaire, has been restricted to unimodal ratings of the object. We present a new mental imagery questionnaire that measures multisensory imagery. Specifically, the newly developed Vividness of Wine Imagery Questionnaire (VWIQ) measures mental imagery of wine in the visual, olfactory, and gustatory modalities. Wine is an ideal domain to explore multisensory imagery because wine drinking is a multisensory experience, it involves the neglected chemical senses (smell and taste), and provides the opportunity to explore the effect of experience and expertise on imagery (from wine novices to experts). The VWIQ questionnaire showed high internal consistency and reliability, and correlated with other validated measures of imagery. Overall, the VWIQ may serve as a useful tool to explore mental imagery for researchers, as well as individuals in the wine industry during sommelier training and evaluation of wine professionals.
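
    The abstract reports that the VWIQ shows high internal consistency; the statistic conventionally used for this is Cronbach's alpha, although the abstract does not say which measure the authors computed, so the snippet below is purely illustrative and the ratings are invented.

      import numpy as np

      def cronbach_alpha(ratings):
          """Cronbach's alpha for a participants-by-items rating matrix:
          alpha = k/(k-1) * (1 - sum of item variances / variance of sum scores)."""
          ratings = np.asarray(ratings, dtype=float)
          k = ratings.shape[1]                          # number of items
          item_vars = ratings.var(axis=0, ddof=1)       # per-item variances
          total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
          return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

      # Hypothetical vividness ratings (1-5) from four participants on three items.
      print(round(cronbach_alpha([[5, 4, 5], [3, 3, 4], [2, 2, 2], [4, 5, 4]]), 2))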
  • Cronin, K. A., Van Leeuwen, E. J. C., Mulenga, I. C., & Bodamer, M. D. (2011). Behavioral response of a chimpanzee mother toward her dead infant. American Journal of Primatology, 73(5), 415-421. doi:10.1002/ajp.20927.

    Abstract

    The mother-offspring bond is one of the strongest and most essential social bonds. Following is a detailed behavioral report of a female chimpanzee two days after her 16-month-old infant died, on the first day that the mother is observed to create distance between her and the corpse. A series of repeated approaches and retreats to and from the body are documented, along with detailed accounts of behaviors directed toward the dead infant by the mother and other group members. The behavior of the mother toward her dead infant not only highlights the maternal contribution to the mother-infant relationship but also elucidates the opportunities chimpanzees have to learn about the sensory cues associated with death, and the implications of death for the social environment.
  • Cronin, K. A. (2013). [Review of the book Chimpanzees of the Lakeshore: Natural history and culture at Mahale by Toshisada Nishida]. Animal Behaviour, 85, 685-686. doi:10.1016/j.anbehav.2013.01.001.

    Abstract

    First paragraph: Motivated by his quest to characterize the society of the last common ancestor of humans and other great apes, Toshisada Nishida set out as a graduate student to the Mahale Mountains on the eastern shore of Lake Tanganyika, Tanzania. This book is a story of his 45 years with the Mahale chimpanzees, or as he calls it, their ethnography. Beginning with his accounts of meeting the Tongwe people and the challenges of provisioning the chimpanzees for habituation, Nishida reveals how he slowly unravelled the unit group and community basis of chimpanzee social organization. The book begins and ends with a feeling of chronological order, starting with his arrival at Mahale and ending with an eye towards the future, with concrete recommendations for protecting wild chimpanzees. However, the bulk of the book is topically organized with chapters on feeding behaviour, growth and development, play and exploration, communication, life histories, sexual strategies, politics and culture.
  • Cuskley, C., Dingemanse, M., Kirby, S., & Van Leeuwen, T. M. (2019). Cross-modal associations and synesthesia: Categorical perception and structure in vowel–color mappings in a large online sample. Behavior Research Methods, 51, 1651-1675. doi:10.3758/s13428-019-01203-7.

    Abstract

    We report associations between vowel sounds, graphemes, and colours collected online from over 1000 Dutch speakers. We provide open materials including a Python implementation of the structure measure, and code for a single page web application to run simple cross-modal tasks. We also provide a full dataset of colour-vowel associations from 1164 participants, including over 200 synaesthetes identified using consistency measures. Our analysis reveals salient patterns in cross-modal associations, and introduces a novel measure of isomorphism in cross-modal mappings. We find that while acoustic features of vowels significantly predict certain mappings (replicating prior work), both vowel phoneme category and grapheme category are even better predictors of colour choice. Phoneme category is the best predictor of colour choice overall, pointing to the importance of phonological representations in addition to acoustic cues. Generally, high/front vowels are lighter, more green, and more yellow than low/back vowels. Synaesthetes respond more strongly on some dimensions, choosing lighter and more yellow colours for high and mid front vowels than non-synaesthetes. We also present a novel measure of cross-modal mappings adapted from ecology, which uses a simulated distribution of mappings to measure the extent to which participants' actual mappings are structured isomorphically across modalities. Synaesthetes have mappings that tend to be more structured than non-synaesthetes, and more consistent colour choices across trials correlate with higher structure scores. Nevertheless, the large majority (~70%) of participants produce structured mappings, indicating that the capacity to make isomorphically structured mappings across distinct modalities is shared to a large extent, even if the exact nature of mappings varies across individuals. Overall, this novel structure measure suggests a distribution of structured cross-modal association in the population, with synaesthetes on one extreme and participants with unstructured associations on the other.
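
    The abstract mentions a released Python implementation of the structure measure; the code below is not that implementation but a minimal sketch of one generic way an isomorphism score of this kind can be computed: a Mantel-style correlation between pairwise distances in an assumed vowel space (here, F1/F2 values) and pairwise distances in the colour space of one participant's choices, compared against a null distribution from shuffled vowel-to-colour assignments. The coordinates, colour values and function names are all invented for illustration.

      import numpy as np
      from itertools import combinations

      def pairwise_dists(points):
          """Euclidean distances between all unordered pairs of points."""
          pts = np.asarray(points, dtype=float)
          return np.array([np.linalg.norm(pts[i] - pts[j])
                           for i, j in combinations(range(len(pts)), 2)])

      def structure_score(vowel_coords, colour_coords, n_shuffles=1000, seed=0):
          """Correlate vowel-space distances with colour-space distances for one
          participant's mapping, then compare the observed correlation with
          correlations obtained after shuffling which colour goes with which
          vowel.  Returns (observed r, proportion of shuffles with r >= observed)."""
          rng = np.random.default_rng(seed)
          d_vowel = pairwise_dists(vowel_coords)
          colour_coords = np.asarray(colour_coords, dtype=float)
          observed = np.corrcoef(d_vowel, pairwise_dists(colour_coords))[0, 1]
          null = []
          for _ in range(n_shuffles):
              shuffled = colour_coords[rng.permutation(len(colour_coords))]
              null.append(np.corrcoef(d_vowel, pairwise_dists(shuffled))[0, 1])
          return observed, float(np.mean(np.asarray(null) >= observed))

      # Hypothetical example: four vowels as (F1, F2) in Hz and the CIELab
      # colours one participant chose for them.
      vowels  = [(300, 2300), (450, 2000), (700, 1200), (450, 800)]
      colours = [(90, -5, 60), (75, 0, 45), (40, 20, 15), (55, 10, -20)]
      print(structure_score(vowels, colours, n_shuffles=200))

    A higher observed correlation, together with a smaller proportion of shuffles reaching it, would indicate a more isomorphically structured mapping in this sketch.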
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A., Norris, D., & McQueen, J. M. (1994). Modelling lexical access from continuous speech input. Dokkyo International Review, 7, 193-215.

    Abstract

    The recognition of speech involves the segmentation of continuous utterances into their component words. Cross-linguistic evidence is briefly reviewed which suggests that although there are language-specific solutions to this segmentation problem, they have one thing in common: they are all based on language rhythm. In English, segmentation is stress-based: strong syllables are postulated to be the onsets of words. Segmentation, however, can also be achieved by a process of competition between activated lexical hypotheses, as in the Shortlist model. A series of experiments is summarised showing that segmentation of continuous speech depends on both lexical competition and a metrically-guided procedure. In the final section, the implementation of metrical segmentation in the Shortlist model is described: the activation of lexical hypotheses matching strong syllables in the input is boosted and that of hypotheses mismatching strong syllables in the input is penalised.
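
    The last sentence of this abstract describes the metrical component of Shortlist in algorithmic terms: candidates aligned with strong syllables are boosted, candidates that mismatch them are penalised. The snippet below is a toy scoring rule in that spirit, not the actual Shortlist implementation; the weights, the segment-position representation and the example positions are all invented.

      # Toy metrical adjustment in the spirit of the description above: a
      # candidate word gets a boost when its onset coincides with a strong
      # syllable onset, and a penalty when a strong syllable begins inside it.
      BOOST, PENALTY = 1.0, -1.0

      def metrical_adjustment(cand_onset, cand_offset, strong_onsets):
          """Score adjustment for one lexical candidate spanning the segment
          positions [cand_onset, cand_offset), given the positions at which
          strong syllables begin."""
          adjustment = 0.0
          for s in strong_onsets:
              if s == cand_onset:
                  adjustment += BOOST      # onset aligned with a strong syllable
              elif cand_onset < s < cand_offset:
                  adjustment += PENALTY    # candidate straddles a strong onset
          return adjustment

      # Hypothetical input with strong syllables beginning at segments 3 and 8.
      strong = [3, 8]
      print(metrical_adjustment(0, 7, strong))   # straddles segment 3: -1.0
      print(metrical_adjustment(3, 7, strong))   # aligned with segment 3: +1.0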
  • Cutler, A., & Otake, T. (1994). Mora or phoneme? Further evidence for language-specific listening. Journal of Memory and Language, 33, 824-844. doi:10.1006/jmla.1994.1039.

    Abstract

    Japanese listeners detect speech sound targets which correspond precisely to a mora (a phonological unit which is the unit of rhythm in Japanese) more easily than targets which do not. English listeners detect medial vowel targets more slowly than consonants. Six phoneme detection experiments investigated these effects in both subject populations, presented with native- and foreign-language input. Japanese listeners produced faster and more accurate responses to moraic than to nonmoraic targets both in Japanese and, where possible, in English; English listeners responded differently. The detection disadvantage for medial vowels appeared with English listeners both in English and in Japanese; again, Japanese listeners responded differently. Some processing operations which listeners apply to speech input are language-specific; these language-specific procedures, appropriate for listening to input in the native language, may be applied to foreign-language input irrespective of whether they remain appropriate.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1, 2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
