Publications

  • Chu, M., & Kita, S. (2016). Co-thought and Co-speech Gestures Are Generated by the Same Action Generation Process. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 257-270. doi:10.1037/xlm0000168.

    Abstract

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments 1 and 2). This suggests that the 2 types of gestures are generated from the same process. We then investigated whether both types of gestures can be generated from the representational use of the action generation process that also generates purposeful actions that have a direct physical impact on the world, such as manipulating an object or locomotion (the action generation hypothesis). To this end, we examined the effect of object affordances on the production of both types of gestures (Experiments 3 and 4). We found that individuals produced co-thought and co-speech gestures more often when the stimulus objects afforded action (objects with a smooth surface) than when they did not (objects with a spiky surface). These results support the action generation hypothesis for representational gestures. However, our findings are incompatible with the hypothesis that co-speech representational gestures are solely generated from the speech production process (the speech production hypothesis).
  • Chu, M., Meyer, A. S., Foulkes, L., & Kita, S. (2014). Individual differences in frequency and saliency of speech-accompanying gestures: The role of cognitive abilities and empathy. Journal of Experimental Psychology: General, 143, 694-709. doi:10.1037/a0033861.

    Abstract

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. Higher frequency of representational gestures was related to poorer visual and spatial working memory, spatial transformation ability, and conceptualization ability; higher frequency of conduit gestures was related to poorer visual working memory, poorer conceptualization ability, and higher levels of empathy; and higher frequency of palm-revealing gestures was related to higher levels of empathy. The saliency of all gestures was positively related to level of empathy. These results demonstrate that cognitive abilities and empathy levels are related to individual differences in gesture frequency and saliency.
  • Chu, M., & Hagoort, P. (2014). Synchronization of speech and gesture: Evidence for interaction in action. Journal of Experimental Psychology: General, 143(4), 1726-1741. doi:10.1037/a0036281.

    Abstract

    Language and action systems are highly interlinked. A critical piece of evidence is that speech and its accompanying gestures are tightly synchronized. Five experiments were conducted to test 2 hypotheses about the synchronization of speech and gesture. According to the interactive view, there is continuous information exchange between the gesture and speech systems, during both their planning and execution phases. According to the ballistic view, information exchange occurs only during the planning phases of gesture and speech, but the 2 systems become independent once their execution has been initiated. In all experiments, participants were required to point to and/or name a light that had just lit up. Virtual reality and motion tracking technologies were used to disrupt their gesture or speech execution. Participants delayed their speech onset when their gesture was disrupted. They did so even when their gesture was disrupted at its late phase and even when they received only the kinesthetic feedback of their gesture. Also, participants prolonged their gestures when their speech was disrupted. These findings support the interactive view and add new constraints on models of speech and gesture production.
  • Chwilla, D., Hagoort, P., & Brown, C. M. (1998). The mechanism underlying backward priming in a lexical decision task: Spreading activation versus semantic matching. Quarterly Journal of Experimental Psychology, 51A(3), 531-560. doi:10.1080/713755773.

    Abstract

    Koriat (1981) demonstrated that an association from the target to a preceding prime, in the absence of an association from the prime to the target, facilitates lexical decision and referred to this effect as "backward priming". Backward priming is of relevance, because it can provide information about the mechanism underlying semantic priming effects. Following Neely (1991), we distinguish three mechanisms of priming: spreading activation, expectancy, and semantic matching/integration. The goal was to determine which of these mechanisms causes backward priming, by assessing effects of backward priming on a language-relevant ERP component, the N400, and reaction time (RT). Based on previous work, we propose that the N400 priming effect reflects expectancy and semantic matching/integration, but in contrast with RT does not reflect spreading activation. Experiment 1 shows a backward priming effect that is qualitatively similar for the N400 and RT in a lexical decision task. This effect was not modulated by an ISI manipulation. Experiment 2 clarifies that the N400 backward priming effect reflects genuine changes in N400 amplitude and cannot be ascribed to other factors. We will argue that these backward priming effects cannot be due to expectancy but are best accounted for in terms of semantic matching/integration.
  • Collins, J. (2016). The role of language contact in creating correlations between humidity and tone. Journal of Language Evolution, 46-52. doi:10.1093/jole/lzv012.
  • Comasco, E., Schijven, D., de Maeyer, H., Vrettou, M., Nylander, I., Sundström-Poromaa, I., & Olivier, J. D. A. (2019). Constitutive serotonin transporter reduction resembles maternal separation with regard to stress-related gene expression. ACS Chemical Neuroscience, 10, 3132-3142. doi:10.1021/acschemneuro.8b00595.

    Abstract

    Interactive effects between allelic variants of the serotonin transporter (5-HTT) promoter-linked polymorphic region (5-HTTLPR) and stressors on depression symptoms have been documented, as well as questioned, by meta-analyses. Translational models of constitutive 5-htt reduction and experimentally controlled stressors often led to inconsistent behavioral and molecular findings and often did not include females. The present study sought to investigate the effect of 5-htt genotype, maternal separation, and sex on the expression of stress-related candidate genes in the rat hippocampus and frontal cortex. The mRNA expression levels of Avp, Pomc, Crh, Crhbp, Crhr1, Bdnf, Ntrk2, Maoa, Maob, and Comt were assessed in the hippocampus and frontal cortex of 5-htt +/− and 5-htt +/+ male and female adult rats exposed, or not, to daily maternal separation for 180 min during the first 2 postnatal weeks. Gene- and brain region-dependent, but sex-independent, interactions between 5-htt genotype and maternal separation were found. Gene expression levels were higher in 5-htt +/+ rats not exposed to maternal separation compared with the other experimental groups. Maternal separation and 5-htt +/− genotype did not yield additive effects on gene expression. Correlative relationships, mainly positive, were observed within, but not across, brain regions in all groups except in non-maternally separated 5-htt +/+ rats. Gene expression patterns in the hippocampus and frontal cortex of rats exposed to maternal separation resembled the ones observed in rats with reduced 5-htt expression, regardless of sex. These results suggest that floor effects of 5-htt reduction and maternal separation might explain inconsistent findings in humans and rodents.
  • Cooper, R. P., & Guest, O. (2014). Implementations are not specifications: Specification, replication and experimentation in computational cognitive modeling. Cognitive Systems Research, 27, 42-49. doi:10.1016/j.cogsys.2013.05.001.

    Abstract

    Contemporary methods of computational cognitive modeling have recently been criticized by Addyman and French (2012) on the grounds that they have not kept up with developments in computer technology and human–computer interaction. They present a manifesto for change according to which, it is argued, modelers should devote more effort to making their models accessible, both to non-modelers (with an appropriate easy-to-use user interface) and modelers alike. We agree that models, like data, should be freely available according to the normal standards of science, but caution against confusing implementations with specifications. Models may embody theories, but they generally also include implementation assumptions. Cognitive modeling methodology needs to be sensitive to this. We argue that specification, replication and experimentation are methodological approaches that can address this issue.
  • Corps, R. E., Pickering, M. J., & Gambi, C. (2019). Predicting turn-ends in discourse context. Language, Cognition and Neuroscience, 34(5), 615-627. doi:10.1080/23273798.2018.1552008.

    Abstract

    Research suggests that during conversation, interlocutors coordinate their utterances by predicting the speaker’s forthcoming utterance and its end. In two experiments, we used a button-pressing task, in which participants pressed a button when they thought a speaker reached the end of their utterance, to investigate what role the wider discourse plays in turn-end prediction. Participants heard two-utterance sequences, in which the content of the second utterance was or was not constrained by the content of the first. In both experiments, participants responded earlier, but not more precisely, when the first utterance was constraining rather than unconstraining. Response times and precision were unaffected by whether they listened to dialogues or monologues (Experiment 1) and by whether they read the first utterance out loud or silently (Experiment 2), providing no indication that activation of production mechanisms facilitates prediction. We suggest that content predictions aid comprehension but not turn-end prediction.

  • Costa, A., Cutler, A., & Sebastian-Galles, N. (1998). Effects of phoneme repertoire on phoneme decision. Perception and Psychophysics, 60, 1022-1031.

    Abstract

    In three experiments, listeners detected vowel or consonant targets in lists of CV syllables constructed from five vowels and five consonants. Responses were faster in a predictable context (e.g., listening for a vowel target in a list of syllables all beginning with the same consonant) than in an unpredictable context (e.g., listening for a vowel target in a list of syllables beginning with different consonants). In Experiment 1, the listeners’ native language was Dutch, in which vowel and consonant repertoires are similar in size. The difference between predictable and unpredictable contexts was comparable for vowel and consonant targets. In Experiments 2 and 3, the listeners’ native language was Spanish, which has four times as many consonants as vowels; here effects of an unpredictable consonant context on vowel detection were significantly greater than effects of an unpredictable vowel context on consonant detection. This finding suggests that listeners’ processing of phonemes takes into account the constitution of their language’s phonemic repertoire and the implications that this has for contextual variability.
  • Cousijn, H., Eissing, M., Fernández, G., Fisher, S. E., Franke, B., Zwers, M., Harrison, P. J., & Arias-Vasquez, A. (2014). No effect of schizophrenia risk genes MIR137, TCF4, and ZNF804A on macroscopic brain structure. Schizophrenia Research, 159, 329-332. doi:10.1016/j.schres.2014.08.007.

    Abstract

    Single nucleotide polymorphisms (SNPs) within the MIR137, TCF4, and ZNF804A genes show genome-wide association to schizophrenia. However, the biological basis for the associations is unknown. Here, we tested the effects of these genes on brain structure in 1300 healthy adults. Using volumetry and voxel-based morphometry, neither gene-wide effects—including the combined effect of the genes—nor single SNP effects—including specific psychosis risk SNPs—were found on total brain volume, grey matter, white matter, or hippocampal volume. These results suggest that the associations between these risk genes and schizophrenia are unlikely to be mediated via effects on macroscopic brain structure.
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Cristia, A., Minagawa-Kawai, Y., Egorova, N., Gervain, J., Filippin, L., Cabrol, D., & Dupoux, E. (2014). Neural correlates of infant accent discrimination: An fNIRS study. Developmental Science, 17(4), 628-635. doi:10.1111/desc.12160.

    Abstract

    The present study investigated the neural correlates of infant discrimination of very similar linguistic varieties (Quebecois and Parisian French) using functional Near InfraRed Spectroscopy. In line with previous behavioral and electrophysiological data, there was no evidence that 3-month-olds discriminated the two regional accents, whereas 5-month-olds did, with the locus of discrimination in left anterior perisylvian regions. These neuroimaging results suggest that a developing language network relying crucially on left perisylvian cortices sustains infants' discrimination of similar linguistic varieties within this early period of infancy.

  • Cristia, A., Seidl, A., Junge, C., Soderstrom, M., & Hagoort, P. (2014). Predicting individual variation in language from infant speech perception measures. Child development, 85(4), 1330-1345. doi:10.1111/cdev.12193.

    Abstract

    There are increasing reports that individual variation in behavioral and neurophysiological measures of infant speech processing predicts later language outcomes, and specifically concurrent or subsequent vocabulary size. If such findings hold up under scrutiny, they could both illuminate theoretical models of language development and contribute to the prediction of communicative disorders. A qualitative, systematic review of this emergent literature illustrated the variety of approaches that have been used and highlighted some conceptual problems regarding the measurements. A quantitative analysis of the same data established that the bivariate relation was significant, with correlations of similar strength to those found for well-established nonlinguistic predictors of language. Further exploration of infant speech perception predictors, particularly from a methodological perspective, is recommended.
  • Cristia, A., & Seidl, A. (2014). The hyperarticulation hypothesis of infant-directed speech. Journal of Child Language, 41(4), 913-934. doi:10.1017/S0305000912000669.

    Abstract

    Typically, the point vowels [i,ɑ,u] are acoustically more peripheral in infant-directed speech (IDS) compared to adult-directed speech (ADS). If caregivers seek to highlight lexically relevant contrasts in IDS, then two sounds that are contrastive should become more distinct, whereas two sounds that are surface realizations of the same underlying sound category should not. To test this prediction, vowels that are phonemically contrastive ([i-ɪ] and [eɪ-ε]), vowels that map onto the same underlying category ([æ- ] and [ε- ]), and the point vowels [i,ɑ,u] were elicited in IDS and ADS by American English mothers of two age groups of infants (four- and eleven-month-olds). As in other work, point vowels were produced in more peripheral positions in IDS compared to ADS. However, there was little evidence of hyperarticulation per se (e.g. [i-ɪ] was hypoarticulated). We suggest that across-the-board lexically based hyperarticulation is not a necessary feature of IDS.

  • Croijmans, I. (2016). Gelukkig kunnen we erover praten: Over de kunst om geuren en smaken in woorden te omschrijven. koffieTcacao, 17, 80-81.
  • Croijmans, I., Speed, L., Arshamian, A., & Majid, A. (2019). Measuring the multisensory imagery of wine: The Vividness of Wine Imagery Questionnaire. Multisensory Research, 32(3), 179-195. doi:10.1163/22134808-20191340.

    Abstract

    When we imagine objects or events, we often engage in multisensory mental imagery. Yet investigations of mental imagery have typically focused on only one sensory modality: vision. One reason for this is that the most common tool for the measurement of imagery, the questionnaire, has been restricted to unimodal ratings of the object. We present a new mental imagery questionnaire that measures multisensory imagery. Specifically, the newly developed Vividness of Wine Imagery Questionnaire (VWIQ) measures mental imagery of wine in the visual, olfactory, and gustatory modalities. Wine is an ideal domain in which to explore multisensory imagery because wine drinking is a multisensory experience, involves the neglected chemical senses (smell and taste), and provides the opportunity to explore the effect of experience and expertise on imagery (from wine novices to experts). The VWIQ showed high internal consistency and reliability, and correlated with other validated measures of imagery. Overall, the VWIQ may serve as a useful tool for researchers exploring mental imagery, as well as for the wine industry during sommelier training and the evaluation of wine professionals.
  • Croijmans, I., & Majid, A. (2016). Not all flavor expertise is equal: The language of wine and coffee experts. PLoS One, 11(6): e0155845. doi:10.1371/journal.pone.0155845.

    Abstract

    People in Western cultures are poor at naming smells and flavors. However, for wine and coffee experts, describing smells and flavors is part of their daily routine. So are experts better than lay people at conveying smells and flavors in language? If smells and flavors are more easily linguistically expressed by experts, or more codable, then experts should be better than novices at describing smells and flavors. If experts are indeed better, we can also ask how general this advantage is: do experts show higher codability only for smells and flavors they are expert in (i.e., wine experts for wine and coffee experts for coffee), or is their linguistic dexterity more general? To address these questions, wine experts, coffee experts, and novices were asked to describe the smell and flavor of wines, coffees, everyday odors, and basic tastes. The resulting descriptions were compared on a number of measures. We found expertise endows a modest advantage in smell and flavor naming. Wine experts showed more consistency in how they described wine smells and flavors than coffee experts and novices, but coffee experts were not more consistent for coffee descriptions. Neither expert group was any more accurate at identifying everyday smells or tastes. Interestingly, both wine and coffee experts tended to use more source-based terms (e.g., vanilla) in descriptions of their own area of expertise, whereas novices tended to use more evaluative terms (e.g., nice). However, the overall linguistic strategies for both groups were on a par. To conclude, experts only have a limited, domain-specific advantage when communicating about smells and flavors. The ability to communicate about smells and flavors is a matter not only of perceptual training, but of specific linguistic training too.

  • Cronin, K. A., West, V., & Ross, S. R. (2016). Investigating the Relationship between Welfare and Rearing Young in Captive Chimpanzees (Pan troglodytes). Applied Animal Behaviour Science, 181, 166-172. doi:10.1016/j.applanim.2016.05.014.

    Abstract

    Whether the opportunity to breed and rear young improves the welfare of captive animals is currently debated. However, there is very little empirical data available to evaluate this relationship and this study is a first attempt to contribute objective data to this debate. We utilized the existing variation in the reproductive experiences of sanctuary chimpanzees at Chimfunshi Wildlife Orphanage Trust in Zambia to investigate whether breeding and rearing young was associated with improved welfare for adult females (N = 43). We considered several behavioural welfare indicators, including rates of luxury behaviours and abnormal or stress-related behaviours under normal conditions and conditions inducing social stress. Furthermore, we investigated whether spending time with young was associated with good or poor welfare for adult females, regardless of their kin relationship. We used generalized linear mixed models and found no difference between adult females with and without dependent young on any welfare indices, nor did we find that time spent in proximity to unrelated young predicted welfare (all full-null model comparisons likelihood ratio tests P > 0.05). However, we did find that coprophagy was more prevalent among mother-reared than non-mother-reared individuals, in line with recent work suggesting this behaviour may have a different etiology than other behaviours often considered to be abnormal. In sum, the findings from this initial study lend support to the hypothesis that the opportunity to breed and rear young does not provide a welfare benefit for chimpanzees in captivity. We hope this investigation provides a valuable starting point for empirical study into the welfare implications of managed breeding.

  • Cronin, K. A., Pieper, B., Van Leeuwen, E. J. C., Mundry, R., & Haun, D. B. M. (2014). Problem solving in the presence of others: How rank and relationship quality impact resource acquisition in chimpanzees (Pan troglodytes). PLoS One, 9(4): e93204. doi:10.1371/journal.pone.0093204.

    Abstract

    In the wild, chimpanzees (Pan troglodytes) are often faced with clumped food resources that they may know how to access but abstain from doing so due to social pressures. To better understand how social settings influence resource acquisition, we tested fifteen semi-wild chimpanzees from two social groups alone and in the presence of others. We investigated how resource acquisition was affected by relative social dominance, whether collaborative problem solving or (active or passive) sharing occurred amongst any of the dyads, and whether these outcomes were related to relationship quality as determined from six months of observational data. Results indicated that chimpanzees, regardless of rank, obtained fewer rewards when tested in the presence of others compared to when they were tested alone. Chimpanzees demonstrated behavioral inhibition; chimpanzees who showed proficient skill when alone often abstained from solving the task when in the presence of others. Finally, individuals with close social relationships spent more time together in the problem solving space, but collaboration and sharing were infrequent and sessions in which collaboration or sharing did occur contained more instances of aggression. Group living provides benefits and imposes costs, and these findings highlight that one cost of group living may be diminishing productive individual behaviors.
  • Cronin, K. A., Van Leeuwen, E. J. C., Vreeman, V., & Haun, D. B. M. (2014). Population-level variability in the social climates of four chimpanzee societies. Evolution and Human Behavior, 35(5), 389-396. doi:10.1016/j.evolhumbehav.2014.05.004.

    Abstract

    Recent debates have questioned the extent to which culturally-transmitted norms drive behavioral variation in resource sharing across human populations. We shed new light on this discussion by examining the group-level variation in the social dynamics and resource sharing of chimpanzees, a species that is highly social and forms long-term community associations but differs from humans in the extent to which cultural norms are adopted and enforced. We rely on theory developed in primate socioecology to guide our investigation in four neighboring chimpanzee groups at a sanctuary in Zambia. We used a combination of experimental and observational approaches to assess the distribution of resource holding potential in each group. In the first assessment, we measured the proportion of the population that gathered in a resource-rich zone, in the second we assessed naturally occurring social spacing via social network analysis, and in the third we assessed the degree to which benefits were equally distributed within the group. We report significant, stable group-level variation across these multiple measures, indicating that group-level variation in resource sharing and social tolerance is not necessarily reliant upon human-like cultural norms.
  • Cuskley, C., Dingemanse, M., Kirby, S., & Van Leeuwen, T. M. (2019). Cross-modal associations and synesthesia: Categorical perception and structure in vowel–color mappings in a large online sample. Behavior Research Methods, 51, 1651-1675. doi:10.3758/s13428-019-01203-7.

    Abstract

    We report associations between vowel sounds, graphemes, and colours collected online from over 1000 Dutch speakers. We provide open materials including a Python implementation of the structure measure, and code for a single page web application to run simple cross-modal tasks. We also provide a full dataset of colour-vowel associations from 1164 participants, including over 200 synaesthetes identified using consistency measures. Our analysis reveals salient patterns in cross-modal associations, and introduces a novel measure of isomorphism in cross-modal mappings. We find that while acoustic features of vowels significantly predict certain mappings (replicating prior work), both vowel phoneme category and grapheme category are even better predictors of colour choice. Phoneme category is the best predictor of colour choice overall, pointing to the importance of phonological representations in addition to acoustic cues. Generally, high/front vowels are lighter, more green, and more yellow than low/back vowels. Synaesthetes respond more strongly on some dimensions, choosing lighter and more yellow colours for high and mid front vowels than non-synaesthetes. We also present a novel measure of cross-modal mappings adapted from ecology, which uses a simulated distribution of mappings to measure the extent to which participants' actual mappings are structured isomorphically across modalities. Synaesthetes have mappings that tend to be more structured than non-synaesthetes, and more consistent colour choices across trials correlate with higher structure scores. Nevertheless, the large majority (~70%) of participants produce structured mappings, indicating that the capacity to make isomorphically structured mappings across distinct modalities is shared to a large extent, even if the exact nature of mappings varies across individuals. Overall, this novel structure measure suggests a distribution of structured cross-modal association in the population, with synaesthetes on one extreme and participants with unstructured associations on the other.
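    The Cuskley et al. abstract describes a structure measure that compares participants' actual cross-modal mappings against a simulated distribution of random mappings. The sketch below illustrates the general idea only, as a Mantel-style permutation z-score; it is an assumed reconstruction, not the authors' released Python implementation, and all function and variable names here are invented.

```python
import itertools
import random
import statistics

def structure_score(stim_dist, resp_dist, n_sim=1000, seed=0):
    """Illustrative isomorphism score for a cross-modal mapping.

    stim_dist: pairwise distance matrix between stimuli (e.g. vowels).
    resp_dist: pairwise distance matrix between the responses mapped to
               those stimuli (e.g. chosen colours), in stimulus order.
    Returns a z-score of the observed stimulus/response distance
    correlation against n_sim randomly shuffled mappings.
    """
    n = len(resp_dist)
    pairs = list(itertools.combinations(range(n), 2))

    def corr(mapping):
        # Pearson correlation between inter-stimulus distances and the
        # inter-response distances induced by this mapping.
        xs = [stim_dist[i][j] for i, j in pairs]
        ys = [resp_dist[mapping[i]][mapping[j]] for i, j in pairs]
        mx, my = statistics.mean(xs), statistics.mean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0

    observed = corr(list(range(n)))          # participant's actual mapping
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sim):                   # simulated random mappings
        m = list(range(n))
        rng.shuffle(m)
        sims.append(corr(m))
    mu, sd = statistics.mean(sims), statistics.pstdev(sims)
    return (observed - mu) / sd if sd else 0.0
```

    Under this toy formulation, a participant whose colour distances mirror the stimulus distances gets a high positive score, while an arbitrary mapping scores near zero, which matches the abstract's contrast between structured and unstructured mappers.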
  • Cutler, A., & Norris, D. (2016). Bottoms up! How top-down pitfalls ensnare speech perception researchers too. Commentary on C. Firestone & B. Scholl: Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, e236. doi:10.1017/S0140525X15002745.

    Abstract

    Not only can the pitfalls that Firestone & Scholl (F&S) identify be generalised across multiple studies within the field of visual perception, but also they have general application outside the field wherever perceptual and cognitive processing are compared. We call attention to the widespread susceptibility of research on the perception of speech to versions of the same pitfalls.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A. (2014). In thrall to the vocabulary. Acoustics Australia, 42, 84-89.

    Abstract

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes; longer words inevitably tend to contain shorter ones. Recognising speech thus requires distinguishing intended words from accidentally present ones. Acoustic information in speech is used wherever it contributes significantly to this process; but as this review shows, its contribution differs across languages, with the consequences of this including: identical and equivalently present information distinguishing the same phonemes being used in Polish but not in German, or in English but not in Italian; identical stress cues being used in Dutch but not in English; expectations about likely embedding patterns differing across English, French, Japanese.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Dautriche, I., Cristia, A., Brusini, P., Yuan, S., Fisher, C., & Christophe, A. (2014). Toddlers default to canonical surface-to-meaning mapping when learning verbs. Child Development, 85(3), 1168-1180. doi:10.1111/cdev.12183.

    Acknowledgements

    This work was supported by grants from the French Agence Nationale de la Recherche (ANR-2010-BLAN-1901) and from French Fondation de France to Anne Christophe, from the National Institute of Child Health and Human Development (HD054448) to Cynthia Fisher, Fondation Fyssen and Ecole de Neurosciences de Paris to Alex Cristia, and a PhD fellowship from the Direction Générale de l'Armement (DGA, France) supported by the PhD program FdV (Frontières du Vivant) to Isabelle Dautriche. We thank Isabelle Brunet for the recruitment, Michel Dutat for the technical support, and Hernan Anllo for his puppet mastery skill. We are grateful to the families that participated in this study. We also thank two anonymous reviewers for their comments on an earlier draft of this manuscript.
  • Dediu, D. (2016). A multi-layered problem. IEEE CDS Newsletter, 13, 14-15.

    Abstract

    A response to Moving Beyond Nature-Nurture: a Problem of Science or Communication? by John Spencer, Mark Blumberg and David Shenk
  • Dediu, D., & de Boer, B. (2016). Language evolution needs its own journal. Journal of Language Evolution, 1, 1-6. doi:10.1093/jole/lzv001.

    Abstract

    Interest in the origins and evolution of language has been around for as long as language has been around. However, only recently has the empirical study of language come of age. We argue that the field has sufficiently advanced that it now needs its own journal—the Journal of Language Evolution.
  • Dediu, D., & Christiansen, M. H. (2016). Language evolution: Constraints and opportunities from modern genetics. Topics in Cognitive Science, 8, 361-370. doi:10.1111/tops.12195.

    Abstract

    Our understanding of language, its origins and subsequent evolution (including language change) is shaped not only by data and theories from the language sciences, but also fundamentally by the biological sciences. Recent developments in genetics and evolutionary theory offer not only very strong constraints on what scenarios of language evolution are possible and probable, but also exciting opportunities for understanding otherwise puzzling phenomena. Due to the breathtaking rate of advancement in these fields, and the complexity, subtlety and sometimes apparent non-intuitiveness of the phenomena discovered, some of these recent developments have either been completely missed by language scientists or been misperceived and misrepresented. In this short paper, we offer an update on some of these findings and theoretical developments through a selection of illustrative examples and discussions that cast new light on current debates in the language sciences. The main message of our paper is that life is much more complex and nuanced than anybody could have predicted even a few decades ago, and that we need to be flexible in our theorizing instead of embracing a priori dogmas and trying to patch paradigms that are no longer satisfactory.
  • Dediu, D., & Moisik, S. R. (2019). Pushes and pulls from below: Anatomical variation, articulation and sound change. Glossa: A Journal of General Linguistics, 4(1): 7. doi:10.5334/gjgl.646.

    Abstract

    This paper argues that inter-individual and inter-group variation in language acquisition, perception, processing and production, rooted in our biology, may play a largely neglected role in sound change. We begin by discussing the patterning of these differences, highlighting those related to vocal tract anatomy with a foundation in genetics and development. We use our ArtiVarK database, a large multi-ethnic sample comprising 3D intraoral optical scans, as well as structural, static and real-time MRI scans of vocal tract anatomy and speech articulation, to quantify the articulatory strategies used to produce the North American English /r/ and to statistically show that anatomical factors seem to influence these articulatory strategies. Building on work showing that these alternative articulatory strategies may have indirect coarticulatory effects, we propose two models for how biases due to variation in vocal tract anatomy may affect sound change. The first involves direct overt acoustic effects of such biases that are then reinterpreted by the hearers, while the second is based on indirect coarticulatory phenomena generated by acoustically covert biases that produce overt “at-a-distance” acoustic effects. This view implies that speaker communities might be “poised” for change because they always contain pools of “standing variation” of such biased speakers, and when factors such as the frequency of the biased speakers in the community, their positions in the communicative network or the topology of the network itself change, sound change may rapidly follow as a self-reinforcing network-level phenomenon, akin to a phase transition. Thus, inter-speaker variation in structured and dynamic communicative networks may couple the initiation and actuation of sound change.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2019). Weak biases emerging from vocal tract anatomy shape the repeated transmission of vowels. Nature Human Behaviour, 3, 1107-1115. doi:10.1038/s41562-019-0663-x.

    Abstract

    Linguistic diversity is affected by multiple factors, but it is usually assumed that variation in the anatomy of our speech organs plays no explanatory role. Here we use realistic computer models of the human speech organs to test whether inter-individual and inter-group variation in the shape of the hard palate (the bony roof of the mouth) affects acoustics of speech sounds. Based on 107 midsagittal MRI scans of the hard palate of human participants, we modelled with high accuracy the articulation of a set of five cross-linguistically representative vowels by agents learning to produce speech sounds. We found that different hard palate shapes result in subtle differences in the acoustics and articulatory strategies of the produced vowels, and that these individual-level speech idiosyncrasies are amplified by the repeated transmission of language across generations. Therefore, we suggest that, besides culture and environment, quantitative biological variation can be amplified, also influencing language.
  • Dediu, D. (2016). Typology for the masses. Linguistic Typology, 20(3), 579-581. doi:10.1515/lingty-2016-0029.
  • Defina, R. (2016). Do serial verb constructions describe single events? A study of co-speech gestures in Avatime. Language, 92(4), 890-910. doi:10.1353/lan.2016.0076.

    Abstract

    Serial verb constructions have often been said to refer to single conceptual events. However, evidence to support this claim has been elusive. This article introduces co-speech gestures as a new way of investigating the relationship. The alignment patterns of gestures with serial verb constructions and other complex clauses were compared in Avatime (Ka-Togo, Kwa, Niger-Congo). Serial verb constructions tended to occur with single gestures overlapping the entire construction. In contrast, other complex clauses were more likely to be accompanied by distinct gestures overlapping individual verbs. This pattern of alignment suggests that serial verb constructions are in fact used to describe single events.

    Additional information

    https://doi.org/10.1353/lan.2016.0069
  • Defina, R. (2014). Arbil: Free tool for creating, editing and searching metadata. Language Documentation and Conservation, 8, 307-314.
  • Defina, R. (2016). Serial verb constructions and their subtypes in Avatime. Studies in Language, 40(3), 648-680. doi:10.1075/sl.40.3.07def.
  • Demontis, D., Walters, R. K., Martin, J., Mattheisen, M., Als, T. D., Agerbo, E., Baldursson, G., Belliveau, R., Bybjerg-Grauholm, J., Bækvad-Hansen, M., Cerrato, F., Chambert, K., Churchhouse, C., Dumont, A., Eriksson, N., Gandal, M., Goldstein, J. I., Grasby, K. L., Grove, J., Gudmundsson, O. O., Hansen, C. S., Hauberg, M. E., Hollegaard, M. V., Howrigan, D. P., Huang, H., Maller, J. B., Martin, A. R., Martin, N. G., Moran, J., Pallesen, J., Palmer, D. S., Pedersen, C. B., Pedersen, M. G., Poterba, T., Poulsen, J. B., Ripke, S., Robinson, E. B., Satterstrom, F. K., Stefansson, H., Stevens, C., Turley, P., Walters, G. B., Won, H., Wright, M. J., ADHD Working Group of the Psychiatric Genomics Consortium (PGC), EArly Genetics and Lifecourse Epidemiology (EAGLE) Consortium, 23andMe Research Team, Andreassen, O. A., Asherson, P., Burton, C. L., Boomsma, D. I., Cormand, B., Dalsgaard, S., Franke, B., Gelernter, J., Geschwind, D., Hakonarson, H., Haavik, J., Kranzler, H. R., Kuntsi, J., Langley, K., Lesch, K.-P., Middeldorp, C., Reif, A., Rohde, L. A., Roussos, P., Schachar, R., Sklar, P., Sonuga-Barke, E. J. S., Sullivan, P. F., Thapar, A., Tung, J. Y., Waldman, I. D., Medland, S. E., Stefansson, K., Nordentoft, M., Hougaard, D. M., Werge, T., Mors, O., Mortensen, P. B., Daly, M. J., Faraone, S. V., Børglum, A. D., & Neale, B. (2019). Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nature Genetics, 51, 63-75. doi:10.1038/s41588-018-0269-7.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a highly heritable childhood behavioral disorder affecting 5% of children and 2.5% of adults. Common genetic variants contribute substantially to ADHD susceptibility, but no variants have been robustly associated with ADHD. We report a genome-wide association meta-analysis of 20,183 individuals diagnosed with ADHD and 35,191 controls that identifies variants surpassing genome-wide significance in 12 independent loci, finding important new information about the underlying biology of ADHD. Associations are enriched in evolutionarily constrained genomic regions and loss-of-function intolerant genes and around brain-expressed regulatory marks. Analyses of three replication studies (a cohort of individuals diagnosed with ADHD, a self-reported ADHD sample, and a meta-analysis of quantitative measures of ADHD symptoms in the population) support these findings while highlighting study-specific differences in genetic overlap with educational attainment. Strong concordance with GWAS of quantitative population measures of ADHD symptoms supports that clinical diagnosis of ADHD is an extreme expression of continuous heritable traits.
  • Deriziotis, P., O'Roak, B. J., Graham, S. A., Estruch, S. B., Dimitropoulou, D., Bernier, R. A., Gerdts, J., Shendure, J., Eichler, E. E., & Fisher, S. E. (2014). De novo TBR1 mutations in sporadic autism disrupt protein functions. Nature Communications, 5: 4954. doi:10.1038/ncomms5954.

    Abstract

    Next-generation sequencing recently revealed that recurrent disruptive mutations in a few genes may account for 1% of sporadic autism cases. Coupling these novel genetic data to empirical assays of protein function can illuminate crucial molecular networks. Here we demonstrate the power of the approach, performing the first functional analyses of TBR1 variants identified in sporadic autism. De novo truncating and missense mutations disrupt multiple aspects of TBR1 function, including subcellular localization, interactions with co-regulators and transcriptional repression. Missense mutations inherited from unaffected parents did not disturb function in our assays. We show that TBR1 homodimerizes, that it interacts with FOXP2, a transcription factor implicated in speech/language disorders, and that this interaction is disrupted by pathogenic mutations affecting either protein. These findings support the hypothesis that de novo mutations in sporadic autism have severe functional consequences. Moreover, they uncover neurogenetic mechanisms that bridge different neurodevelopmental disorders involving language deficits.
  • Deriziotis, P., Graham, S. A., Estruch, S. B., & Fisher, S. E. (2014). Investigating protein-protein interactions in live cells using Bioluminescence Resonance Energy Transfer. Journal of visualized experiments, 87: e51438. doi:10.3791/51438.

    Abstract

    Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a ‘donor’ luciferase enzyme to an ‘acceptor’ fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.

    Additional information

    video
  • Devanna, P., & Vernes, S. C. (2014). A direct molecular link between the autism candidate gene RORa and the schizophrenia candidate MIR137. Scientific Reports, 4: 3994. doi:10.1038/srep03994.

    Abstract

    Retinoic acid-related orphan receptor alpha gene (RORa) and the microRNA MIR137 have both recently been identified as novel candidate genes for neuropsychiatric disorders. RORa encodes a ligand-dependent orphan nuclear receptor that acts as a transcriptional regulator and miR-137 is a brain enriched small non-coding RNA that interacts with gene transcripts to control protein levels. Given the mounting evidence for RORa in autism spectrum disorders (ASD) and MIR137 in schizophrenia and ASD, we investigated if there was a functional biological relationship between these two genes. Herein, we demonstrate that miR-137 targets the 3'UTR of RORa in a site-specific manner. We also provide further support for MIR137 as an autism candidate by showing that a large number of previously implicated autism genes are also putatively targeted by miR-137. This work supports the role of MIR137 as an ASD candidate and demonstrates a direct biological link between these previously unrelated autism candidate genes.
  • Devanna, P., Middelbeek, J., & Vernes, S. C. (2014). FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways. Frontiers in Cellular Neuroscience, 8: 305. doi:10.3389/fncel.2014.00305.

    Abstract

    FOXP2 was the first gene shown to cause a Mendelian form of speech and language disorder. Although developmentally expressed in many organs, loss of a single copy of FOXP2 leads to a phenotype that is largely restricted to orofacial impairment during articulation and linguistic processing deficits. Why perturbed FOXP2 function affects specific aspects of the developing brain remains elusive. We investigated the role of FOXP2 in neuronal differentiation and found that FOXP2 drives molecular changes consistent with neuronal differentiation in a human model system. We identified a network of FOXP2 regulated genes related to retinoic acid signaling and neuronal differentiation. FOXP2 also produced phenotypic changes associated with neuronal differentiation including increased neurite outgrowth and reduced migration. Crucially, cells expressing FOXP2 displayed increased sensitivity to retinoic acid exposure. This suggests a mechanism by which FOXP2 may be able to increase the cellular differentiation response to environmental retinoic acid cues for specific subsets of neurons in the brain. These data demonstrate that FOXP2 promotes neuronal differentiation by interacting with the retinoic acid signaling pathway and regulates key processes required for normal circuit formation such as neuronal migration and neurite outgrowth. In this way, FOXP2, which is found only in specific subpopulations of neurons in the brain, may drive precise neuronal differentiation patterns and/or control localization and connectivity of these FOXP2-positive cells.
  • Dias, C., Estruch, S. B., Graham, S. A., McRae, J., Sawiak, S. J., Hurst, J. A., Joss, S. K., Holder, S. E., Morton, J. E., Turner, C., Thevenon, J., Mellul, K., Sánchez-Andrade, G., Ibarra-Soria, X., Derizioti, P., Santos, R. F., Lee, S.-C., Faivre, L., Kleefstra, T., Liu, P., Hurles, M. E., DDD Study, Fisher, S. E., & Logan, D. W. (2016). BCL11A haploinsufficiency causes an intellectual disability syndrome and dysregulates transcription. The American Journal of Human Genetics, 99(2), 253-274. doi:10.1016/j.ajhg.2016.05.030.

    Abstract

    Intellectual disability (ID) is a common condition with considerable genetic heterogeneity. Next-generation sequencing of large cohorts has identified an increasing number of genes implicated in ID, but their roles in neurodevelopment remain largely unexplored. Here we report an ID syndrome caused by de novo heterozygous missense, nonsense, and frameshift mutations in BCL11A, encoding a transcription factor that is a putative member of the BAF swi/snf chromatin-remodeling complex. Using a comprehensive integrated approach to ID disease modeling, involving human cellular analyses coupled to mouse behavioral, neuroanatomical, and molecular phenotyping, we provide multiple lines of functional evidence for phenotypic effects. The etiological missense variants cluster in the amino-terminal region of human BCL11A, and we demonstrate that they all disrupt its localization, dimerization, and transcriptional regulatory activity, consistent with a loss of function. We show that Bcl11a haploinsufficiency in mice causes impaired cognition, abnormal social behavior, and microcephaly in accordance with the human phenotype. Furthermore, we identify shared aberrant transcriptional profiles in the cortex and hippocampus of these mouse models. Thus, our work implicates BCL11A haploinsufficiency in neurodevelopmental disorders and defines additional targets regulated by this gene, with broad relevance for our understanding of ID and related syndromes.
  • Diaz, B., Mitterer, H., Broersma, M., Escera, C., & Sebastián-Gallés, N. (2016). Variability in L2 phonemic learning originates from speech-specific capabilities: An MMN study on late bilinguals. Bilingualism: Language and Cognition, 19(5), 955-970. doi:10.1017/S1366728915000450.

    Abstract

    People differ in their ability to perceive second language (L2) sounds. In early bilinguals the variability in learning L2 phonemes stems from speech-specific capabilities (Díaz, Baus, Escera, Costa & Sebastián-Gallés, 2008). The present study addresses whether speech-specific capabilities similarly explain variability in late bilinguals. Event-related potentials were recorded (using a design similar to Díaz et al., 2008) in two groups of late Dutch–English bilinguals who were good or poor in overtly discriminating the L2 English vowels /ε-æ/. The mismatch negativity, an index of discrimination sensitivity, was similar between the groups in conditions involving pure tones (of different length, frequency, and presentation order) but was attenuated in poor L2 perceivers for native, unknown, and L2 phonemes. These results suggest that variability in L2 phonemic learning originates from speech-specific capabilities and imply a continuity of L2 phonemic learning mechanisms throughout the lifespan.
  • Dima, A. L., & Dediu, D. (2016). Computation of Adherence to Medications and Visualization of Medication Histories in R with AdhereR: Towards Transparent and Reproducible Use of Electronic Healthcare Data. PLoS One, 12(4): e0174426. doi:10.1371/journal.pone.0174426.

    Abstract

    Adherence to medications is an important indicator of the quality of medication management and impacts on health outcomes and cost-effectiveness of healthcare delivery. Electronic healthcare data (EHD) are increasingly used to estimate adherence in research and clinical practice, yet standardization and transparency of data processing are still a concern. Comprehensive and flexible open-source algorithms can facilitate the development of high-quality, consistent, and reproducible evidence in this field. Some EHD-based clinical decision support systems (CDSS) include visualization of medication histories, but this is rarely integrated in adherence analyses and not easily accessible for data exploration or implementation in new clinical settings. We introduce AdhereR, a package for the widely used open-source statistical environment R, designed to support researchers in computing EHD-based adherence estimates and in visualizing individual medication histories and adherence patterns. AdhereR implements a set of functions that are consistent with current adherence guidelines, definitions and operationalizations. We illustrate the use of AdhereR with an example dataset of 2-year records of 100 patients and describe the various analysis choices possible and how they can be adapted to different health conditions and types of medications. The package is freely available for use and its implementation facilitates the integration of medication history visualizations in open-source CDSS platforms.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2016). Beat that word: How listeners integrate beat gesture and focus in multimodal speech discourse. Journal of Cognitive Neuroscience, 28(9), 1255-1269. doi:10.1162/jocn_a_00963.

    Abstract

    Communication is facilitated when listeners allocate their attention to important information (focus) in the message, a process called "information structure." Linguistic cues like the preceding context and pitch accent help listeners to identify focused information. In multimodal communication, relevant information can be emphasized by nonverbal cues like beat gestures, which represent rhythmic nonmeaningful hand movements. Recent studies have found that linguistic and nonverbal attention cues are integrated independently in single sentences. However, it is possible that these two cues interact when information is embedded in context, because context allows listeners to predict what information is important. In an ERP study, we tested this hypothesis and asked listeners to view videos capturing a dialogue. In the critical sentence, focused and nonfocused words were accompanied by beat gestures, grooming hand movements, or no gestures. ERP results showed that focused words are processed more attentively than nonfocused words as reflected in an N1 and P300 component. Hand movements also captured attention and elicited a P300 component. Importantly, beat gesture and focus interacted in a late time window of 600-900 msec relative to target word onset, giving rise to a late positivity when nonfocused words were accompanied by beat gestures. Our results show that listeners integrate beat gesture with the focus of the message and that integration costs arise when beat gesture falls on nonfocused information. This suggests that beat gestures fulfill a unique focusing function in multimodal discourse processing and that they have to be integrated with the information structure of the message.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M., & Enfield, N. J. (2014). Ongeschreven regels van de taal. Psyche en Brein, 6, 6-11.

    Abstract

    If you listen to conversations around the world, you notice that human dialogue follows universal rules. These rules steer and enrich our social interaction.
  • Dingemanse, M., Kendrick, K. H., & Enfield, N. J. (2016). A Coding Scheme for Other-Initiated Repair across Languages. Open Linguistics, 2, 35-46. doi:10.1515/opli-2016-0002.

    Abstract

    We provide an annotated coding scheme for other-initiated repair, along with guidelines for building collections and aggregating cases based on interactionally relevant similarities and differences. The questions and categories of the scheme are grounded in inductive observations of conversational data and connected to a rich body of work on other-initiated repair in conversation analysis. The scheme is developed and tested in a 12-language comparative project and can serve as a stepping stone for future work on other-initiated repair and the systematic comparative study of conversational structures.
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2014). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. Studies in Language, 38, 5-43. doi:10.1075/sl.38.1.01din.

    Abstract

    In conversation, people have to deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Dingemanse, M. (2014). Making new ideophones in Siwu: Creative depiction in conversation. Pragmatics and Society, 5(3), 384-405. doi:10.1075/ps.5.3.04din.

    Abstract

    Ideophones are found in many of the world’s languages. Though they are a major word class on a par with nouns and verbs, their origins are ill-understood, and the question of ideophone creation has been a source of controversy. This paper studies ideophone creation in naturally occurring speech. New, unconventionalised ideophones are identified using native speaker judgements, and are studied in context to understand the rules and regularities underlying their production and interpretation. People produce and interpret new ideophones with the help of the semiotic infrastructure that underlies the use of existing ideophones: foregrounding frames certain stretches of speech as depictive enactments of sensory imagery, and various types of iconicity link forms and meanings. As with any creative use of linguistic resources, context and common ground also play an important role in supporting rapid ‘good enough’ interpretations of new material. The making of new ideophones is a special case of a more general phenomenon of creative depiction: the art of presenting verbal material in such a way that the interlocutor recognises and interprets it as a depiction.
  • Dingemanse, M., & Enfield, N. J. (2014). Let's talk: Universal social rules underlie languages. Scientific American Mind, 25, 64-69. doi:10.1038/scientificamericanmind0914-64.

    Abstract

    Recent developments in the science of language signal the emergence of a new paradigm for language study: a social approach to the fundamental questions of what language is like, how much languages really have in common, and why only our species has it. The key to these developments is a new appreciation of the need to study everyday spoken language, with all its complications and ‘imperfections’, in a systematic way. The work reviewed in this article, on turn-taking, timing, and other-initiated repair in languages around the world, has important implications for our understanding of human sociality and sheds new light on the social shape of language. For the first time in the history of linguistics, we are no longer tied to what can be written down or thought up. Rather, we look at language as a biologist would: as it occurs in nature.
  • Dingemanse, M., Schuerman, W. L., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117-e133. doi:10.1353/lan.2016.0034.

    Abstract

    Sound symbolism is a phenomenon with broad relevance to the study of language and mind, but there has been a disconnect between its investigations in linguistics and psychology. This study tests the sound-symbolic potential of ideophones—words described as iconic—in an experimental task that improves over prior work in terms of ecological validity and experimental control. We presented 203 ideophones from five languages to eighty-two Dutch listeners in a binary-choice task, in four versions: original recording, full diphone resynthesis, segments-only resynthesis, and prosody-only resynthesis. Listeners guessed the meaning of all four versions above chance, confirming the iconicity of ideophones and showing the viability of speech synthesis as a way of controlling for segmental and suprasegmental properties in experimental studies of sound symbolism. The success rate was more modest than in prior studies using pseudowords like bouba/kiki, implying that assumptions based on such words cannot simply be transferred to natural languages. Prosody and segments together drive the effect: neither alone is sufficient, showing that segments and prosody work together as cues supporting iconic interpretations. The findings cast doubt on attempts to ascribe iconic meanings to segments alone and support a view of ideophones as words that combine arbitrariness and iconicity. We discuss the implications for theory and methods in the empirical study of sound symbolism and iconicity.

    Additional information

    https://muse.jhu.edu/article/619540
  • Djemie, T., Weckhuysen, S., von Spiczak, S., Carvill, G. L., Jaehn, J., Anttonen, A. K., Brilstra, E., Caglayan, H. S., De Kovel, C. G. F., Depienne, C., Gaily, E., Gennaro, E., Giraldez, B. G., Gormley, P., Guerrero-Lopez, R., Guerrini, R., Hamalainen, E., Hartmann, `., Hernandez-Hernandez, L., Hjalgrim, H., Koeleman, B. P., Leguern, E., Lehesjoki, A. E., Lemke, J. R., Leu, C., Marini, C., McMahon, J. M., Mei, D., Moller, R. S., Muhle, H., Myers, C. T., Nava, C., Serratosa, J. M., Sisodiya, S. M., Stephani, U., Striano, P., van Kempen, M. J., Verbeek, N. E., Usluer, S., Zara, F., Palotie, A., Mefford, H. C., Scheffer, I. E., De Jonghe, P., Helbig, I., & Suls, A. (2016). Pitfalls in genetic testing: the story of missed SCN1A mutations. Molecular Genetics & Genomic Medicine, 4(4), 457-464. doi:10.1002/mgg3.217.

    Abstract

    Background: Sanger sequencing, still the standard technique for genetic testing in most diagnostic laboratories and until recently widely used in research, is gradually being complemented by next-generation sequencing (NGS). No single mutation detection technique is, however, perfect in identifying all mutations. Therefore, we wondered to what extent inconsistencies between Sanger sequencing and NGS affect the molecular diagnosis of patients. Since mutations in SCN1A, the major gene implicated in epilepsy, are found in the majority of Dravet syndrome (DS) patients, we focused on missed SCN1A mutations. Methods: We sent out a survey to 16 genetic centers performing SCN1A testing. Results: We collected data on 28 mutations initially missed using Sanger sequencing. All patients were falsely reported as SCN1A mutation-negative, due both to technical limitations and to human errors. Conclusion: We illustrate the pitfalls of Sanger sequencing and, most importantly, provide evidence that SCN1A mutations are an even more frequent cause of DS than already anticipated.
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.

    Abstract

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gesture more as it might be more challenging for non-native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.

    Additional information

    Supporting information
  • Drijvers, L., Mulder, K., & Ernestus, M. (2016). Alpha and gamma band oscillations index differential processing of acoustically reduced and full forms. Brain and Language, 153-154, 27-37. doi:10.1016/j.bandl.2016.01.003.

    Abstract

    Reduced forms like yeshay for yesterday often occur in conversations. Previous behavioral research reported a processing advantage for full over reduced forms. The present study investigated whether this processing advantage is reflected in a modulation of alpha (8–12 Hz) and gamma (30+ Hz) band activity. In three electrophysiological experiments, participants listened to full and reduced forms in isolation (Experiment 1), sentence-final position (Experiment 2), or mid-sentence position (Experiment 3). Alpha power was larger in response to reduced forms than to full forms, but only in Experiments 1 and 2. We interpret these increases in alpha power as reflections of higher auditory cognitive load. In all experiments, gamma power only increased in response to full forms, which we interpret as showing that lexical activation spreads more quickly through the semantic network for full than for reduced forms. These results confirm a processing advantage for full forms, especially in non-medial sentence position.
  • Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.

    Abstract

    Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG) we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall scored significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.

    Additional information

    1-s2.0-S1053811919302216-mmc1.docx
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Lexically-guided perceptual learning in non-native listening. Bilingualism: Language and Cognition, 19(5), 914-920. doi:10.1017/S136672891600002X.

    Abstract

    There is ample evidence that native and non-native listeners use lexical knowledge to retune their native phonetic categories following ambiguous pronunciations. The present study investigates whether a non-native ambiguous sound can retune non-native phonetic categories. After a brief exposure to an ambiguous British English [l/ɹ] sound, Dutch listeners demonstrated retuning. This retuning was, however, asymmetrical: the non-native listeners seemed to show (more) retuning of the /ɹ/ category than of the /l/ category, suggesting that non-native listeners can retune non-native phonetic categories. This asymmetry is argued to be related to the large phonetic variability of /r/ in both Dutch and English.
  • Drude, S., Awete, W., & Aweti, A. (2019). A ortografia da língua Awetí. LIAMES: Línguas Indígenas Americanas, 19: e019014. doi:10.20396/liames.v19i0.8655746.

    Abstract

    This paper describes and motivates the orthography of the Awetí language (Tupí, Upper Xingu/MT), based on an analysis of the phonological and grammatical structure of Awetí. The orthography is the result of a long collaborative effort among the three authors, begun in 1998. It defines not only an alphabet (the representation of the language's vowels and consonants) but also addresses internal variation, resyllabification, lenition, palatalization, and other (morpho-)phonological processes. Both the written representation of the glottal stop and the orthographic consequences of nasal harmony received special attention. Although lexical stress is not marked orthographically in Awetí, the vast majority of affixes and particles are discussed with respect to stress and its interaction with adjacent morphemes, thereby also determining orthographic words. Finally, an alphabetical order was established in which digraphs are treated as sequences of letters, while the glottal stop ⟨ʼ⟩ is ignored, facilitating the learning of Awetí. The orthography as described here has been used in schools for roughly ten years for literacy instruction in Awetí, with good results. We believe that several of the arguments raised here can be productively transferred to other languages with similar phenomena (the glottal stop as a consonant, nasal harmony, morpho-phonological assimilation, etc.).
  • Drude, S., Broeder, D., & Trilsbeek, P. (2014). The Language Archive and its solutions for sustainable endangered languages corpora. Book 2.0, 4, 5-20. doi:10.1386/btwo.4.1-2.5_1.

    Abstract

    Since the late 1990s, the technical group at the Max-Planck-Institute for Psycholinguistics has worked on solutions for important challenges in building sustainable data archives, in particular how to guarantee long-term availability of digital research data for future research. The support for the well-known DOBES (Documentation of Endangered Languages) programme has greatly inspired and advanced this work, and led to the ongoing development of a whole suite of tools for annotating, cataloguing and archiving multi-media data. At the core of the LAT (Language Archiving Technology) tools is the IMDI metadata schema, now being integrated into a larger network of digital resources in the European CLARIN project. The multi-media annotator ELAN (with its web-based cousin ANNEX) is now well known, and not only among documentary linguists. We present an overview of the solutions, both achieved and in development, for creating and exploiting sustainable digital data, in particular in the area of documenting languages and cultures, and their interfaces with other related developments.
  • Dunn, M. (2014). [Review of the book Evolutionary Linguistics by April McMahon and Robert McMahon]. American Anthropologist, 116(3), 690-691.
  • Eaves, L. J., St Pourcain, B., Smith, G. D., York, T. P., & Evans, D. M. (2014). Resolving the Effects of Maternal and Offspring Genotype on Dyadic Outcomes in Genome Wide Complex Trait Analysis (“M-GCTA”). Behavior Genetics, 44(5), 445-455. doi:10.1007/s10519-014-9666-6.

    Abstract

    Genome wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype (“maternal effects”, M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ~4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered.
  • Edmunds, R., L'Hours, H., Rickards, L., Trilsbeek, P., Vardigan, M., & Mokrane, M. (2016). Core trustworthy data repositories requirements. Zenodo, 168411. doi:10.5281/zenodo.168411.

    Abstract

    The Core Trustworthy Data Repository Requirements were developed by the DSA–WDS Partnership Working Group on Repository Audit and Certification, a Working Group (WG) of the Research Data Alliance. The goal of the effort was to create a set of harmonized common requirements for certification of repositories at the core level, drawing from criteria already put in place by the Data Seal of Approval (DSA: www.datasealofapproval.org) and the ICSU World Data System (ICSU-WDS: https://www.icsu-wds.org/services/certification). An additional goal of the project was to develop common procedures to be implemented by both DSA and ICSU-WDS. Ultimately, the DSA and ICSU-WDS plan to collaborate on a global framework for repository certification that moves from the core to the extended (nestor-Seal DIN 31644), to the formal (ISO 16363) level.
  • Eising, E., Carrion Castillo, A., Vino, A., Strand, E. A., Jakielski, K. J., Scerri, T. S., Hildebrand, M. S., Webster, R., Ma, A., Mazoyer, B., Francks, C., Bahlo, M., Scheffer, I. E., Morgan, A. T., Shriberg, L. D., & Fisher, S. E. (2019). A set of regulatory genes co-expressed in embryonic human brain is implicated in disrupted speech development. Molecular Psychiatry, 24, 1065-1078. doi:10.1038/s41380-018-0020-x.

    Abstract

    Genetic investigations of people with impaired development of spoken language provide windows into key aspects of human biology. Over 15 years after FOXP2 was identified, most speech and language impairments remain unexplained at the molecular level. We sequenced whole genomes of nineteen unrelated individuals diagnosed with childhood apraxia of speech, a rare disorder enriched for causative mutations of large effect. Where DNA was available from unaffected parents, we discovered de novo mutations, implicating genes, including CHD3, SETD1A and WDR5. In other probands, we identified novel loss-of-function variants affecting KAT6A, SETBP1, ZFHX4, TNRC6B and MKL2, regulatory genes with links to neurodevelopment. Several of the new candidates interact with each other or with known speech-related genes. Moreover, they show significant clustering within a single co-expression module of genes highly expressed during early human brain development. This study highlights gene regulatory pathways in the developing brain that may contribute to acquisition of proficient speech.

    Additional information

    Eising_etal_2018sup.pdf
  • Eising, E., Huisman, S. M., Mahfouz, A., Vijfhuizen, L. S., Anttila, V., Winsvold, B. S., Kurth, T., Ikram, M. A., Freilinger, T., Kaprio, J., Boomsma, D. I., van Duijn, C. M., Järvelin, M.-R.-R., Zwart, J.-A., Quaye, L., Strachan, D. P., Kubisch, C., Dichgans, M., Davey Smith, G., Stefansson, K., Palotie, A., Chasman, D. I., Ferrari, M. D., Terwindt, G. M., de Vries, B., Nyholt, D. R., Lelieveldt, B. P., van den Maagdenberg, A. M., & Reinders, M. J. (2016). Gene co-expression analysis identifies brain regions and cell types involved in migraine pathophysiology: a GWAS-based study using the Allen Human Brain Atlas. Human Genetics, 135(4), 425-439. doi:10.1007/s00439-016-1638-x.

    Abstract

    Migraine is a common disabling neurovascular brain disorder typically characterised by attacks of severe headache and associated with autonomic and neurological symptoms. Migraine is caused by an interplay of genetic and environmental factors. Genome-wide association studies (GWAS) have identified over a dozen genetic loci associated with migraine. Here, we integrated migraine GWAS data with high-resolution spatial gene expression data of normal adult brains from the Allen Human Brain Atlas to identify specific brain regions and molecular pathways that are possibly involved in migraine pathophysiology. To this end, we used two complementary methods. In GWAS data from 23,285 migraine cases and 95,425 controls, we first studied modules of co-expressed genes that were calculated based on human brain expression data for enrichment of genes that showed association with migraine. Enrichment of a migraine GWAS signal was found for five modules that suggest involvement in migraine pathophysiology of: (i) neurotransmission, protein catabolism and mitochondria in the cortex; (ii) transcription regulation in the cortex and cerebellum; and (iii) oligodendrocytes and mitochondria in subcortical areas. Second, we used the high-confidence genes from the migraine GWAS as a basis to construct local migraine-related co-expression gene networks. Signatures of all brain regions and pathways that were prominent in the first method also surfaced in the second method, thus providing support that these brain regions and pathways are indeed involved in migraine pathophysiology.
  • Eising, E., De Leeuw, C., Min, J. L., Anttila, V., Verheijen, M. H. G., Terwindt, G. M., Dichgans, M., Freilinger, T., Kubisch, C., Ferrari, M. D., Smit, A. B., De Vries, B., Palotie, A., Van Den Maagdenberg, A. M. J. M., & Posthuma, D. (2016). Involvement of astrocyte and oligodendrocyte gene sets in migraine. Cephalalgia, 36(7), 640-647. doi:10.1177/0333102415618614.

    Abstract

    Migraine is a common episodic brain disorder characterized by recurrent attacks of severe unilateral headache and additional neurological symptoms. Two main migraine types can be distinguished based on the presence of aura symptoms that can accompany the headache: migraine with aura and migraine without aura. Multiple genetic and environmental factors confer disease susceptibility. Recent genome-wide association studies (GWAS) indicate that migraine susceptibility genes are involved in various pathways, including neurotransmission, which have already been implicated in genetic studies of monogenic familial hemiplegic migraine, a subtype of migraine with aura. Methods: To further explore the genetic background of migraine, we performed a gene set analysis of migraine GWAS data of 4954 clinic-based patients with migraine, as well as 13,390 controls. Curated sets of synaptic genes and sets of genes predominantly expressed in three glial cell types (astrocytes, microglia and oligodendrocytes) were investigated. Discussion: Our results show that gene sets containing astrocyte- and oligodendrocyte-related genes are associated with migraine, which is especially true for gene sets involved in protein modification and signal transduction. Observed differences between migraine with aura and migraine without aura indicate that both migraine types, at least in part, seem to have a different genetic background.
  • Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.

    Abstract

    How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies; first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
  • Erard, M. (2016). Solving Australia's language puzzle. Science, 353(6306), 1357-1359. doi:10.1126/science.353.6306.1357.
  • Ernestus, M. (2014). Acoustic reduction and the roles of abstractions and exemplars in speech processing. Lingua, 142, 27-41. doi:10.1016/j.lingua.2012.12.006.

    Abstract

    Acoustic reduction refers to the frequent phenomenon in conversational speech that words are produced with fewer or lenited segments compared to their citation forms. The few published studies on the production and comprehension of acoustic reduction have important implications for the debate on the relevance of abstractions and exemplars in speech processing. This article discusses these implications. It first briefly introduces the key assumptions of simple abstractionist and simple exemplar-based models. It then discusses the literature on acoustic reduction and draws the conclusion that both types of models need to be extended to explain all findings. The ultimate model should allow for the storage of different pronunciation variants, but also reserve an important role for phonetic implementation. Furthermore, the recognition of a highly reduced pronunciation variant requires top-down information and leads to activation of the corresponding unreduced variant, the variant that reaches listeners' consciousness. These findings are best accounted for in hybrid models, which assume both abstract representations and exemplars. None of the hybrid models formulated so far can account for all data on reduced speech, and further research is needed to obtain detailed insight into how speakers produce and listeners comprehend reduced speech.
  • Ernestus, M., Giezenaar, G., & Dikmans, M. (2016). Ikfstajezotuuknie: Half uitgesproken woorden in alledaagse gesprekken. Les, 199, 7-9.

    Abstract

    In informal conversations, Amsterdam often sounds like Amsdam and Rotterdam like Rodam, without most native speakers being aware of it. In everyday situations a considerable proportion of the sounds are dropped. In addition, many sounds are articulated more weakly (for example, a d as a j, when the mouth is not fully closed). It seems likely that these half-pronounced words pose a problem for second-language learners, since reduced forms can deviate strongly from the forms these learners have been taught. Whether this is really the case is what the authors investigated in two studies. Before discussing these two studies, they first briefly describe the different types of reduction that occur.
  • Estruch, S. B., Graham, S. A., Chinnappa, S. M., Deriziotis, P., & Fisher, S. E. (2016). Functional characterization of rare FOXP2 variants in neurodevelopmental disorder. Journal of Neurodevelopmental Disorders, 8: 44. doi:10.1186/s11689-016-9177-2.
  • Estruch, S. B., Graham, S. A., Deriziotis, P., & Fisher, S. E. (2016). The language-related transcription factor FOXP2 is post-translationally modified with small ubiquitin-like modifiers. Scientific Reports, 6: 20911. doi:10.1038/srep20911.

    Abstract

    Mutations affecting the transcription factor FOXP2 cause a rare form of severe speech and language disorder. Although it is clear that sufficient FOXP2 expression is crucial for normal brain development, little is known about how this transcription factor is regulated. To investigate post-translational mechanisms for FOXP2 regulation, we searched for protein interaction partners of FOXP2, and identified members of the PIAS family as novel FOXP2 interactors. PIAS proteins mediate post-translational modification of a range of target proteins with small ubiquitin-like modifiers (SUMOs). We found that FOXP2 can be modified with all three human SUMO proteins and that PIAS1 promotes this process. An aetiological FOXP2 mutation found in a family with speech and language disorder markedly reduced FOXP2 SUMOylation. We demonstrate that FOXP2 is SUMOylated at a single major site, which is conserved in all FOXP2 vertebrate orthologues and in the paralogues FOXP1 and FOXP4. Abolishing this site did not lead to detectable changes in FOXP2 subcellular localization, stability, dimerization or transcriptional repression in cellular assays, but the conservation of this site suggests a potential role for SUMOylation in regulating FOXP2 activity in vivo.

    Additional information

    srep20911-s1.pdf
  • Ho, Y. Y. W., Evans, D. M., Montgomery, G. W., Henders, A. K., Kemp, J. P., Timpson, N. J., St Pourcain, B., Heath, A. C., Madden, P. A. F., Loesch, D. Z., McNevin, D., Daniel, R., Davey-Smith, G., Martin, N. G., & Medland, S. E. (2016). Common genetic variants influence whorls in fingerprint patterns. Journal of Investigative Dermatology, 136(4), 859-862. doi:10.1016/j.jid.2015.10.062.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Everaerd, D., Klumpers, F., Zwiers, M., Guadalupe, T., Franke, B., Van Oostrum, I., Schene, A., Fernandez, G., & Tendolkar, I. (2016). Childhood abuse and deprivation are associated with distinct sex-dependent differences in brain morphology. Neuropsychopharmacology, 41, 1716-1723. doi:10.1038/npp.2015.344.

    Abstract

    Childhood adversity (CA) has been associated with long-term structural brain alterations and an increased risk for psychiatric disorders. Evidence is emerging that subtypes of CA, varying in the dimensions of threat and deprivation, lead to distinct neural and behavioral outcomes. However, these specific associations have yet to be established without potential confounders such as psychopathology. Moreover, differences in neural development and psychopathology necessitate the exploration of sexual dimorphism. Young healthy adult subjects were selected based on history of CA from a large database to assess gray matter (GM) differences associated with specific subtypes of adversity. We compared voxel-based morphometry data of subjects reporting specific childhood exposure to abuse (n = 127) or deprivation (n = 126) and a similar sized group of controls (n = 129) without reported CA. Subjects were matched on age, gender, and educational level. Differences between CA subtypes were found in the fusiform gyrus and middle occipital gyrus, where subjects with a history of deprivation showed reduced GM compared with subjects with a history of abuse. An interaction between sex and CA subtype was found. Women showed less GM in the visual posterior precuneal region after both subtypes of CA than controls. Men had less GM in the postcentral gyrus after childhood deprivation compared with abuse. Our results suggest that even in a healthy population, CA subtypes are related to specific alterations in brain structure, which are modulated by sex. These findings may help understand neurodevelopmental consequences related to CA.
  • Everett, C., Blasi, D. E., & Roberts, S. G. (2016). Language evolution and climate: The case of desiccation and tone. Journal of Language Evolution, 1, 33-46. doi:10.1093/jole/lzv004.

    Abstract

    We make the case that, contra the standard assumption in linguistic theory, the sound systems of human languages are adapted to their environment. While not conclusive, this plausible case rests on several points discussed in this work: First, human behavior is generally adaptive, and the assumption that this characteristic does not extend to linguistic structure is empirically unsubstantiated. Second, animal communication systems are well known to be adaptive within species across a variety of phyla and taxa. Third, research in laryngology demonstrates clearly that ambient desiccation impacts the performance of the human vocal cords. The latter point motivates a clear, testable hypothesis with respect to the synchronic global distribution of language types. Fourth, this hypothesis is supported in our own previous work, and here we discuss new approaches being developed to further explore the hypothesis. We conclude by suggesting that the time has come to more substantively examine the possibility that linguistic sound systems are adapted to their physical ecology.
  • Everett, C., Blasi, D., & Roberts, S. G. (2016). Response: Climate and language: has the discourse shifted? Journal of Language Evolution, 1(1), 83-87. doi:10.1093/jole/lzv013.

    Abstract

    We begin by thanking the respondents for their thoughtful comments and insightful leads. The overall impression we are left with by this exchange is one of progress, even if no consensus remains about the particular hypothesis we raise. To date, there has been a failure to seriously engage with the possibility that humans might adapt their communication to ecological factors. In these exchanges, we see signs of serious engagement with that possibility. Most respondents expressed agreement with the notion that our central premise—that language is ecologically adaptive—requires further exploration and may in fact be operative. We are pleased to see this shift in discourse, and to witness a heightening appreciation of possible ecological constraints on language evolution. It is that shift in discourse that represents progress in our view. Our hope is that future work will continue to explore these issues, paying careful attention to the fact that the human larynx is clearly sensitive to characteristics of ambient air. More generally, we think this exchange is indicative of the growing realization that inquiries into language development must consider potential external factors (see Dediu 2015)...

    Additional information

    AppendixResponseToHammarstrom.pdf
  • Fan, Q., Guo, X., Tideman, J. W. L., Williams, K. M., Yazar, S., Hosseini, S. M., Howe, L. D., St Pourcain, B., Evans, D. M., Timpson, N. J., McMahon, G., Hysi, P. G., Krapohl, E., Wang, Y. X., Jonas, J. B., Baird, P. N., Wang, J. J., Cheng, C. Y., Teo, Y. Y., Wong, T. Y., Ding, X., Wojciechowski, R., Young, T. L., Parssinen, O., Oexle, K., Pfeiffer, N., Bailey-Wilson, J. E., Paterson, A. D., Klaver, C. C. W., Plomin, R., Hammond, C. J., Mackey, D. A., He, M. G., Saw, S. M., Williams, C., Guggenheim, J. A., & CREAM Consortium (2016). Childhood gene-environment interactions and age-dependent effects of genetic variants associated with refractive error and myopia: The CREAM Consortium. Scientific Reports, 6: 25853. doi:10.1038/srep25853.

    Abstract

    Myopia, currently at epidemic levels in East Asia, is a leading cause of untreatable visual impairment. Genome-wide association studies (GWAS) in adults have identified 39 loci associated with refractive error and myopia. Here, the age-of-onset of association between genetic variants at these 39 loci and refractive error was investigated in 5200 children assessed longitudinally across ages 7-15 years, along with gene-environment interactions involving the major environmental risk-factors, nearwork and time outdoors. Specific variants could be categorized as showing evidence of: (a) early-onset effects remaining stable through childhood, (b) early-onset effects that progressed further with increasing age, or (c) onset later in childhood (N = 10, 5 and 11 variants, respectively). A genetic risk score (GRS) for all 39 variants explained 0.6% (P = 6.6E-08) and 2.3% (P = 6.9E-21) of the variance in refractive error at ages 7 and 15, respectively, supporting increased effects from these genetic variants at older ages. Replication in multi-ancestry samples (combined N = 5599) yielded evidence of childhood onset for 6 of 12 variants present in both Asians and Europeans. There was no indication that variant or GRS effects altered depending on time outdoors, however 5 variants showed nominal evidence of interactions with nearwork (top variant, rs7829127 in ZMAT4; P = 6.3E-04).

    Additional information

    srep25853-s1.pdf
  • Fan, Q., Verhoeven, V. J., Wojciechowski, R., Barathi, V. A., Hysi, P. G., Guggenheim, J. A., Höhn, R., Vitart, V., Khawaja, A. P., Yamashiro, K., Hosseini, S. M., Lehtimäki, T., Lu, Y., Haller, T., Xie, J., Delcourt, C., Pirastu, M., Wedenoja, J., Gharahkhani, P., Venturini, C., Miyake, M., Hewitt, A. W., Guo, X., Mazur, J., Huffman, J. E., Williams, K. M., Polasek, O., Campbell, H., Rudan, I., Vatavuk, Z., Wilson, J. F., Joshi, P. K., McMahon, G., St Pourcain, B., Evans, D. M., Simpson, C. L., Schwantes-An, T.-H., Igo, R. P., Mirshahi, A., Cougnard-Gregoire, A., Bellenguez, C., Blettner, M., Raitakari, O., Kähönen, M., Seppälä, I., Zeller, T., Meitinger, T., Ried, J. S., Gieger, C., Portas, L., Van Leeuwen, E. M., Amin, N., Uitterlinden, A. G., Rivadeneira, F., Hofman, A., Vingerling, J. R., Wang, Y. X., Wang, X., Boh, E.-T.-H., Ikram, M. K., Sabanayagam, C., Gupta, P., Tan, V., Zhou, L., Ho, C. E., Lim, W., Beuerman, R. W., Siantar, R., Tai, E.-S., Vithana, E., Mihailov, E., Khor, C.-C., Hayward, C., Luben, R. N., Foster, P. J., Klein, B. E., Klein, R., Wong, H.-S., Mitchell, P., Metspalu, A., Aung, T., Young, T. L., He, M., Pärssinen, O., Van Duijn, C. M., Wang, J. J., Williams, C., Jonas, J. B., Teo, Y.-Y., Mackey, D. A., Oexle, K., Yoshimura, N., Paterson, A. D., Pfeiffer, N., Wong, T.-Y., Baird, P. N., Stambolian, D., Bailey-Wilson, J. E., Cheng, C.-Y., Hammond, C. J., Klaver, C. C., Saw, S.-M., & Consortium for Refractive Error and Myopia (CREAM) (2016). Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error. Nature Communications, 7: 11008. doi:10.1038/ncomms11008.

    Abstract

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

    Additional information

    Fan_etal_2016sup.pdf
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.

    Abstract

    The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition.
  • Fedorenko, E., Morgan, A., Murray, E., Cardinaux, A., Mei, C., Tager-Flusberg, H., Fisher, S. E., & Kanwisher, N. (2016). A highly penetrant form of childhood apraxia of speech due to deletion of 16p11.2. European Journal of Human Genetics, 24(2), 302-306. doi:10.1038/ejhg.2015.149.

    Abstract

    Individuals with heterozygous 16p11.2 deletions reportedly suffer from a variety of difficulties with speech and language. Indeed, recent copy-number variant screens of children with childhood apraxia of speech (CAS), a specific and rare motor speech disorder, have identified three unrelated individuals with 16p11.2 deletions. However, the nature and prevalence of speech and language disorders in general, and CAS in particular, is unknown for individuals with 16p11.2 deletions. Here we took a genotype-first approach, conducting detailed and systematic characterization of speech abilities in a group of 11 unrelated children ascertained on the basis of 16p11.2 deletions. To obtain the most precise and replicable phenotyping, we included tasks that are highly diagnostic for CAS, and we tested children under the age of 18 years, an age group where CAS has been best characterized. Two individuals were largely nonverbal, preventing detailed speech analysis, whereas the remaining nine met the standard accepted diagnostic criteria for CAS. These results link 16p11.2 deletions to a highly penetrant form of CAS. Our findings underline the need for further precise characterization of speech and language profiles in larger groups of affected individuals, which will also enhance our understanding of how genetic pathways contribute to human communication disorders.
  • Felker, E. R., Klockmann, H. E., & De Jong, N. H. (2019). How conceptualizing influences fluency in first and second language speech production. Applied Psycholinguistics, 40(1), 111-136. doi:10.1017/S0142716418000474.

    Abstract

    When speaking in any language, speakers must conceptualize what they want to say before they can formulate and articulate their message. We present two experiments employing a novel experimental paradigm in which the formulating and articulating stages of speech production were kept identical across conditions of differing conceptualizing difficulty. We tracked the effect of difficulty in conceptualizing during the generation of speech (Experiment 1) and during the abandonment and regeneration of speech (Experiment 2) on speaking fluency by Dutch native speakers in their first (L1) and second (L2) language (English). The results showed that abandoning and especially regenerating a speech plan taxes the speaker, leading to disfluencies. For most fluency measures, the increases in disfluency were similar across L1 and L2. However, a significant interaction revealed that abandoning and regenerating a speech plan increases the time needed to solve conceptual difficulties while speaking in the L2 to a greater degree than in the L1. This finding supports theories in which cognitive resources for conceptualizing are shared with those used for later stages of speech planning. Furthermore, a practical implication for language assessment is that increasing the conceptual difficulty of speaking tasks should be considered with caution.
  • Ferreri, L., & Verga, L. (2016). Benefits of music on verbal learning and memory: How and when does it work? Music Perception, 34(2), 167-182. doi:10.1525/mp.2016.34.2.167.

    Abstract

    A long-standing debate in cognitive neurosciences concerns the effect of music on verbal learning and memory. Research in this field has largely provided conflicting results in both clinical as well as non-clinical populations. Although several studies have shown a positive effect of music on the encoding and retrieval of verbal stimuli, music has also been suggested to hinder mnemonic performance by dividing attention. In an attempt to explain this conflict, we review the most relevant literature on the effects of music on verbal learning and memory. Furthermore, we specify several mechanisms through which music may modulate these cognitive functions. We suggest that the extent to which music boosts these cognitive functions relies on experimental factors, such as the relative complexity of musical and verbal stimuli employed. These factors should be carefully considered in further studies, in order to reliably establish how and when music boosts verbal memory and learning. The answers to these questions are not only crucial for our knowledge of how music influences cognitive and brain functions, but may have important clinical implications. Considering the increasing number of approaches using music as a therapeutic tool, the importance of understanding exactly how music works can no longer be underestimated.
  • Fields, E. C., Weber, K., Stillerman, B., Delaney-Busch, N., & Kuperberg, G. (2019). Functional MRI reveals evidence of a self-positivity bias in the medial prefrontal cortex during the comprehension of social vignettes. Social Cognitive and Affective Neuroscience, 14(6), 613-621. doi:10.1093/scan/nsz035.

    Abstract

    A large literature in social neuroscience has associated the medial prefrontal cortex (mPFC) with the processing of self-related information. However, only recently have social neuroscience studies begun to consider the large behavioral literature showing a strong self-positivity bias, and these studies have mostly focused on its correlates during self-related judgments and decision making. We carried out a functional MRI (fMRI) study to ask whether the mPFC would show effects of the self-positivity bias in a paradigm that probed participants’ self-concept without any requirement of explicit self-judgment. We presented social vignettes that were either self-relevant or non-self-relevant with a neutral, positive, or negative outcome described in the second sentence. In previous work using event-related potentials, this paradigm has shown evidence of a self-positivity bias that influences early stages of semantically processing incoming stimuli. In the present fMRI study, we found evidence for this bias within the mPFC: an interaction between self-relevance and valence, with only positive scenarios showing a self vs other effect within the mPFC. We suggest that the mPFC may play a role in maintaining a positively-biased self-concept and discuss the implications of these findings for the social neuroscience of the self and the role of the mPFC.

    Additional information

    Supplementary data
  • Filippi, P. (2016). Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Frontiers in Psychology, 7: 1393. doi:10.3389/fpsyg.2016.01393.

    Abstract

    Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody – and perhaps also of music, continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
  • Filippi, P., Jadoul, Y., Ravignani, A., Thompson, B., & de Boer, B. (2016). Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages. Frontiers in Human Neuroscience, 10: 586. doi:10.3389/fnhum.2016.00586.

    Abstract

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). Pitch enhancement facilitates word learning across visual contexts. Frontiers in Psychology, 5: 1468. doi:10.3389/fpsyg.2014.01468.

    Abstract

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
  • Fisher, S. E., & Tilot, A. K. (2019). Bridging senses: Novel insights from synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190022. doi:10.1098/rstb.2019.0022.
  • Fisher, S. E., & Tilot, A. K. (Eds.). (2019). Bridging senses: Novel insights from synaesthesia [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374.
  • Fisher, S. E. (2019). Human genetics: The evolving story of FOXP2. Current Biology, 29(2), R65-R67. doi:10.1016/j.cub.2018.11.047.

    Abstract

    FOXP2 mutations cause a speech and language disorder, raising interest in potential roles of this gene in human evolution. A new study re-evaluates genomic variation at the human FOXP2 locus but finds no evidence of recent adaptive evolution.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168 -170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fitz, H., & Chang, F. (2019). Language ERPs reflect learning through prediction error propagation. Cognitive Psychology, 111, 15-52. doi:10.1016/j.cogpsych.2019.03.002.

    Abstract

    Event-related potentials (ERPs) provide a window into how the brain is processing language. Here, we propose a theory that argues that ERPs such as the N400 and P600 arise as side effects of an error-based learning mechanism that explains linguistic adaptation and language learning. We instantiated this theory in a connectionist model that can simulate data from three studies on the N400 (amplitude modulation by expectancy, contextual constraint, and sentence position), five studies on the P600 (agreement, tense, word category, subcategorization and garden-path sentences), and a study on the semantic P600 in role reversal anomalies. Since ERPs are learning signals, this account explains adaptation of ERP amplitude to within-experiment frequency manipulations and the way ERP effects are shaped by word predictability in earlier sentences. Moreover, it predicts that ERPs can change over language development. The model provides an account of the sensitivity of ERPs to expectation mismatch, the relative timing of the N400 and P600, the semantic nature of the N400, the syntactic nature of the P600, and the fact that ERPs can change with experience. This approach suggests that comprehension ERPs are related to sentence production and language acquisition mechanisms.
  • FitzPatrick, I., & Indefrey, P. (2016). Accessing Conceptual Representations for Speaking [Editorial]. Frontiers in Psychology, 7: 1216. doi:10.3389/fpsyg.2016.01216.

    Abstract

    Systematic investigations into the role of semantics in the speech production process have remained elusive. This special issue aims at moving forward toward a more detailed account of how precisely conceptual information is used to access the lexicon in speaking and what corresponding format of conceptual representations needs to be assumed. The studies presented in this volume investigated effects of conceptual processing on different processing stages of language production, including sentence formulation, lemma selection, and word form access.
  • FitzPatrick, I., & Indefrey, P. (2014). Head start for target language in bilingual listening. Brain Research, 1542, 111-130. doi:10.1016/j.brainres.2013.10.014.

    Abstract

    In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched, and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited a Late Positive Component (LPC). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2014). Grammatical aspect influences motion event perception: Evidence from a cross-linguistic non-verbal recognition task. Language and Cognition, 6(1), 45-78. doi:10.1017/langcog.2013.2.

    Abstract

    Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
  • Floyd, S. (2014). [Review of the book Flexible word classes: Typological studies of underspecified parts of speech ed. by Jan Rijkhoff and Eva van Lier]. Linguistics, 52, 1499-1502. doi:10.1515/ling-2014-0027.
  • Floyd, S. (2016). [Review of the book Fluent Selves: Autobiography, Person, and History in Lowland South America ed. by Suzanne Oakdale and Magnus Course]. Journal of Linguistic Anthropology, 26(1), 110-111. doi:10.1111/jola.12112.
  • Floyd, S. (2016). Modally hybrid grammar? Celestial pointing for time-of-day reference in Nheengatú. Language, 92(1), 31-64. doi:10.1353/lan.2016.0013.

    Abstract

    From the study of sign languages we know that the visual modality robustly supports the encoding of conventionalized linguistic elements, yet while the same possibility exists for the visual bodily behavior of speakers of spoken languages, such practices are often referred to as ‘gestural’ and are not usually described in linguistic terms. This article describes a practice of speakers of the Brazilian indigenous language Nheengatú of pointing to positions along the east-west axis of the sun’s arc for time-of-day reference, and illustrates how it satisfies any of the common criteria for linguistic elements, as a system of standardized and productive form-meaning pairings whose contributions to propositional meaning remain stable across contexts. First, examples from a video corpus of natural speech demonstrate these conventionalized properties of Nheengatú time reference across multiple speakers. Second, a series of video-based elicitation stimuli test several dimensions of its conventionalization for nine participants. The results illustrate why modality is not an a priori reason that linguistic properties cannot develop in the visual practices that accompany spoken language. The conclusion discusses different possible morphosyntactic and pragmatic analyses for such conventionalized visual elements and asks whether they might be more crosslinguistically common than we presently know.
  • Floyd, S., Manrique, E., Rossi, G., & Torreira, F. (2016). Timing of visual bodily behavior in repair sequences: Evidence from three languages. Discourse Processes, 53(3), 175-204. doi:10.1080/0163853X.2014.992680.

    Abstract

    This article expands the study of other-initiated repair in conversation—when one party signals a problem with producing or perceiving another’s turn at talk—into the domain of visual bodily behavior. It presents one primary cross-linguistic finding about the timing of visual bodily behavior in repair sequences: if the party who initiates repair accompanies their turn with a “hold”—when relatively dynamic movements are temporarily and meaningfully held static—this position will not be disengaged until the problem is resolved and the sequence closed. We base this finding on qualitative and quantitative analysis of corpora of conversational interaction from three unrelated languages representing two different modalities: Northern Italian, the Cha’palaa language of Ecuador, and Argentine Sign Language. The cross-linguistic similarities uncovered by this comparison suggest that visual bodily practices have been semiotized for similar interactive functions across different languages and modalities due to common pressures in face-to-face interaction.
  • Folia, V., & Petersson, K. M. (2014). Implicit structured sequence learning: An fMRI study of the structural mere-exposure effect. Frontiers in Psychology, 5: 41. doi:10.3389/fpsyg.2014.00041.

    Abstract

    In this event-related FMRI study we investigated the effect of five days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After 5 days of implicit acquisition, the FMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference FMRI baseline measurement allowed us to conclude that these FMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 come from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs.
  • Forkel, S. J., Thiebaut de Schotten, M., Dell’Acqua, F., Kalra, L., Murphy, D. G. M., Williams, S. C. R., & Catani, M. (2014). Anatomical predictors of aphasia recovery: a tractography study of bilateral perisylvian language networks. Brain, 137, 2027-2039. doi:10.1093/brain/awu113.

    Abstract

    Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. For patients and clinicians the possibility of relying on valid predictors of recovery is an important asset in the clinical management of stroke-related impairment. Age, level of education, type and severity of initial symptoms are established predictors of recovery. However, anatomical predictors are still poorly understood. In this prospective longitudinal study, we intended to assess anatomical predictors of recovery derived from diffusion tractography of the perisylvian language networks. Our study focused on the arcuate fasciculus, a language pathway composed of three segments connecting Wernicke’s to Broca’s region (i.e. long segment), Wernicke’s to Geschwind’s region (i.e. posterior segment) and Broca’s to Geschwind’s region (i.e. anterior segment). In our study we were particularly interested in understanding how lateralization of the arcuate fasciculus impacts on severity of symptoms and their recovery. Sixteen patients (10 males; mean age 60 ± 17 years, range 28–87 years) underwent post stroke language assessment with the Revised Western Aphasia Battery and neuroimaging scanning within a fortnight from symptoms onset. Language assessment was repeated at 6 months. Backward elimination analysis identified a subset of predictor variables (age, sex, lesion size) to be introduced to further regression analyses. A hierarchical regression was conducted with the longitudinal aphasia severity as the dependent variable. The first model included the subset of variables as previously defined. The second model additionally introduced the left and right arcuate fasciculus (separate analysis for each segment). Lesion size was identified as the only independent predictor of longitudinal aphasia severity in the left hemisphere [beta = −0.630, t(−3.129), P = 0.011]. For the right hemisphere, age [beta = −0.678, t(–3.087), P = 0.010] and volume of the long segment of the arcuate fasciculus [beta = 0.730, t(2.732), P = 0.020] were predictors of longitudinal aphasia severity. Adding the volume of the right long segment to the first-level model increased the overall predictive power of the model from 28% to 57% [F(1,11) = 7.46, P = 0.02]. These findings suggest that different predictors of recovery are at play in the left and right hemisphere. The right hemisphere language network seems to be important in aphasia recovery after left hemispheric stroke.

    Additional information

    supplementary information
