Publications

  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Converging evidence from electrocorticography and BOLD fMRI for a sharp functional boundary in superior temporal gyrus related to multisensory speech processing. Frontiers in Human Neuroscience, 12: 141. doi:10.3389/fnhum.2018.00141.

    Abstract

    Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
  • Ozker, M., Yoshor, D., & Beauchamp, M. (2018). Frontal cortex selects representations of the talker’s mouth to aid in speech perception. eLife, 7: e30387. doi:10.7554/eLife.30387.
  • Palva, J. M., Wang, S. H., Palva, S., Zhigalov, A., Monto, S., Brookes, M. J., & Schoffelen, J.-M. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage, 173, 632-643. doi:10.1016/j.neuroimage.2018.02.032.

    Abstract

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation, have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or “ghost” interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations.
  • Pascucci, D., Hervais-Adelman, A., & Plomp, G. (2018). Gating by induced Alpha–Gamma asynchrony in selective attention. Human Brain Mapping, 39(10), 3854-3870. doi:10.1002/hbm.24216.

    Abstract

    Visual selective attention operates through top–down mechanisms of signal enhancement and suppression, mediated by alpha-band oscillations. The effects of such top–down signals on local processing in primary visual cortex (V1) remain poorly understood. In this work, we characterize the interplay between large-scale interactions and local activity changes in V1 that orchestrates selective attention, using Granger-causality and phase-amplitude coupling (PAC) analysis of EEG source signals. The task required participants to either attend to or ignore oriented gratings. Results from time-varying, directed connectivity analysis revealed frequency-specific effects of attentional selection: bottom–up gamma-band influences from visual areas increased rapidly in response to attended stimuli while distributed top–down alpha-band influences originated from parietal cortex in response to ignored stimuli. Importantly, the results revealed a critical interplay between top–down parietal signals and alpha–gamma PAC in visual areas. Parietal alpha-band influences disrupted the alpha–gamma coupling in visual cortex, which in turn reduced the amount of gamma-band outflow from visual areas. Our results are a first demonstration of how directed interactions affect cross-frequency coupling in downstream areas depending on task demands. These findings suggest that parietal cortex realizes selective attention by disrupting cross-frequency coupling at target regions, which prevents them from propagating task-irrelevant information.
  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Peeters, D. (2018). A standardized set of 3D-objects for virtual reality research and applications. Behavior Research Methods, 50(3), 1047-1054. doi:10.3758/s13428-017-0925-3.

    Abstract

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theory in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3D-objects for virtual reality research is important, as reaching valid theoretical conclusions critically hinges on the use of well controlled experimental stimuli. Sharing standardized 3D-objects across different virtual reality labs will allow for science to move forward more quickly.
  • Peeters, D., & Dijkstra, T. (2018). Sustained inhibition of the native language in bilingual language production: A virtual reality approach. Bilingualism: Language and Cognition, 21(5), 1035-1061. doi:10.1017/S1366728917000396.

    Abstract

    Bilinguals often switch languages as a function of the language background of their addressee. The control mechanisms supporting bilinguals' ability to select the contextually appropriate language are heavily debated. Here we present four experiments in which unbalanced bilinguals named pictures in their first language Dutch and their second language English in mixed and blocked contexts. Immersive virtual reality technology was used to increase the ecological validity of the cued language-switching paradigm. Behaviorally, we consistently observed symmetrical switch costs, reversed language dominance, and asymmetrical mixing costs. These findings indicate that unbalanced bilinguals apply sustained inhibition to their dominant L1 in mixed language settings. Consequent enhanced processing costs for the L1 in a mixed versus a blocked context were reflected by a sustained positive component in event-related potentials. Methodologically, the use of virtual reality opens up a wide range of possibilities to study language and communication in bilingual and other communicative settings.
  • Perlman, M., Little, H., Thompson, B., & Thompson, R. L. (2018). Iconicity in signed and spoken vocabulary: A comparison between American Sign Language, British Sign Language, English, and Spanish. Frontiers in Psychology, 9: 1433. doi:10.3389/fpsyg.2018.01433.

    Abstract

    Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages – American Sign Language and British Sign Language, and two spoken languages – English and Spanish. We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, perceptual strength of vision, audition, touch, smell and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words and adverbs); and (4) between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings more iconic in signs, not words; more auditory meanings more iconic in words, not signs; more tactile meanings more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs more iconic in ASL, BSL, and English, but not Spanish; manual actions especially iconic in ASL and BSL; adjectives more iconic in English and Spanish; color words especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
  • Perry, L. K., Perlman, M., Winter, B., Massaro, D. W., & Lupyan, G. (2018). Iconicity in the speech of children and adults. Developmental Science, 21: e12572. doi:10.1111/desc.12572.

    Abstract

    Iconicity – the correspondence between form and meaning – may help young children learn to use new words. Early-learned words are higher in iconicity than later learned words. However, it remains unclear what role iconicity may play in actual language use. Here, we ask whether iconicity relates not just to the age at which words are acquired, but also to how frequently children and adults use the words in their speech. If iconicity serves to bootstrap word learning, then we would expect that children should say highly iconic words more frequently than less iconic words, especially early in development. We would also expect adults to use iconic words more often when speaking to children than to other adults. We examined the relationship between frequency and iconicity for approximately 2000 English words. Replicating previous findings, we found that more iconic words are learned earlier. Moreover, we found that more iconic words tend to be used more by younger children, and adults use more iconic words when speaking to children than to other adults. Together, our results show that young children not only learn words rated high in iconicity earlier than words low in iconicity, but they also produce these words more frequently in conversation – a pattern that is reciprocated by adults when speaking with children. Thus, the earliest conversations of children are relatively higher in iconicity, suggesting that this iconicity scaffolds the production and comprehension of spoken language during early development.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. NeuroImage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left lateralized frontoparietal and anterior temporal activations while surface-based perceptually oriented processing (shallow instruction) yielded right lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Piai, V., Rommers, J., & Knight, R. T. (2018). Lesion evidence for a critical role of left posterior but not frontal areas in alpha–beta power decreases during context-driven word production. European Journal of Neuroscience, 48(7), 2622-2629. doi:10.1111/ejn.13695.

    Abstract

    Different frequency bands in the electroencephalogram are postulated to support distinct language functions. Studies have suggested that alpha–beta power decreases may index word-retrieval processes. In context-driven word retrieval, participants hear lead-in sentences that either constrain the final word (‘He locked the door with the’) or not (‘She walked in here with the’). The last word is shown as a picture to be named. Previous studies have consistently found alpha–beta power decreases prior to picture onset for constrained relative to unconstrained sentences, localised to the left lateral-temporal and lateral-frontal lobes. However, the relative contribution of temporal versus frontal areas to alpha–beta power decreases is unknown. We recorded the electroencephalogram from patients with stroke lesions encompassing the left lateral-temporal and inferior-parietal regions or left-lateral frontal lobe and from matched controls. Individual participant analyses revealed a behavioural sentence context facilitation effect in all participants, except in the two patients with extensive lesions to temporal and inferior parietal lobes. We replicated the alpha–beta power decreases prior to picture onset in all participants, except in the same two patients with extensive posterior lesions. Thus, whereas posterior lesions eliminated the behavioural and oscillatory context effect, frontal lesions did not. Hierarchical clustering analyses of all patients’ lesion profiles, and behavioural and electrophysiological effects, identified those two patients as having a unique combination of lesion distribution and context effects. These results indicate a critical role for the left lateral-temporal and inferior parietal lobes, but not frontal cortex, in generating the alpha–beta power decreases underlying context-driven word production.
  • Pika, S., Wilkinson, R., Kendrick, K. H., & Vernes, S. C. (2018). Taking turns: Bridging the gap between human and animal communication. Proceedings of the Royal Society B: Biological Sciences, 285(1880): 20180598. doi:10.1098/rspb.2018.0598.

    Abstract

    Language, humans’ most distinctive trait, still remains a ‘mystery’ for evolutionary theory. It is underpinned by a universal infrastructure—cooperative turn-taking—which has been suggested as an ancient mechanism bridging the existing gap between the articulate human species and their inarticulate primate cousins. However, we know remarkably little about turn-taking systems of non-human animals, and methodological confounds have often prevented meaningful cross-species comparisons. Thus, the extent to which cooperative turn-taking is uniquely human or represents a homologous and/or analogous trait is currently unknown. The present paper draws attention to this promising research avenue by providing an overview of the state of the art of turn-taking in four animal taxa—birds, mammals, insects and anurans. It concludes with a new comparative framework to spur more research into this research domain and to test which elements of the human turn-taking system are shared across species and taxa.
  • Poletiek, F. H., Conway, C. M., Ellefson, M. R., Lai, J., Bocanegra, B. R., & Christiansen, M. H. (2018). Under what conditions can recursion be learned? Effects of starting small in artificial grammar learning of recursive structure. Cognitive Science, 42(8), 2855-2889. doi:10.1111/cogs.12685.

    Abstract

    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right‐branching and center‐embedding, with recursive embedded clauses in fixed positions and fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiments 3 and 4, we used a more complex center‐embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented with an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge of simple units to more complex units when the training input “grew” according to structural complexity, compared to when it “grew” according to string length. Overall, the results suggest that starting small confers an advantage for learning complex center‐embedded structures when the input is organized according to structural complexity.
  • Popov, T., Jensen, O., & Schoffelen, J.-M. (2018). Dorsal and ventral cortices are coupled by cross-frequency interactions during working memory. NeuroImage, 178, 277-286. doi:10.1016/j.neuroimage.2018.05.054.

    Abstract

    Oscillatory activity in the alpha and gamma bands is considered key in shaping functional brain architecture. Power increases in the high-frequency gamma band are typically reported in parallel to decreases in the low-frequency alpha band. However, their functional significance and in particular their interactions are not well understood. The present study shows that, in the context of an N-back working memory task, alpha power decreases in the dorsal visual stream are related to gamma power increases in early visual areas. Granger causality analysis revealed directed interregional interactions from dorsal to ventral stream areas, in accordance with task demands. Present results reveal a robust, behaviorally relevant, and architectonically decisive power-to-power relationship between alpha and gamma activity. This relationship suggests that anatomically distant power fluctuations in oscillatory activity can link cerebral network dynamics on a trial-by-trial basis during cognitive operations such as working memory.
  • Popov, T., Oostenveld, R., & Schoffelen, J.-M. (2018). FieldTrip made easy: An analysis protocol for group analysis of the auditory steady state brain response in time, frequency, and space. Frontiers in Neuroscience, 12: 711. doi:10.3389/fnins.2018.00711.

    Abstract

    The auditory steady state evoked response (ASSR) is a robust and frequently utilized phenomenon in psychophysiological research. It reflects the auditory cortical response to an amplitude-modulated constant carrier frequency signal. The present report provides a concrete example of a group analysis of the EEG data from 29 healthy human participants, recorded during an ASSR paradigm, using the FieldTrip toolbox. First, we demonstrate sensor-level analysis in the time domain, allowing for a description of the event-related potentials (ERPs), as well as their statistical evaluation. Second, frequency analysis is applied to describe the spectral characteristics of the ASSR, followed by group level statistical analysis in the frequency domain. Third, we show how time- and frequency-domain analysis approaches can be combined in order to describe the temporal and spectral development of the ASSR. Finally, we demonstrate source reconstruction techniques to characterize the primary neural generators of the ASSR. Throughout, we pay special attention to explaining the design of the analysis pipeline for single subjects and for the group level analysis. The pipeline presented here can be adjusted to accommodate other experimental paradigms and may serve as a template for similar analyses.
  • Popov, V., Ostarek, M., & Tenison, C. (2018). Practices and pitfalls in inferring neural representations. NeuroImage, 174, 340-351. doi:10.1016/j.neuroimage.2018.03.041.

    Abstract

    A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from ground-truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques and attentional modulation.
  • St Pourcain, B., Eaves, L. J., Ring, S. M., Fisher, S. E., Medland, S., Evans, D. M., & Smith, G. D. (2018). Developmental changes within the genetic architecture of social communication behaviour: A multivariate study of genetic variance in unrelated individuals. Biological Psychiatry, 83(7), 598-606. doi:10.1016/j.biopsych.2017.09.020.

    Abstract

    Background: Recent analyses of trait-disorder overlap suggest that psychiatric dimensions may relate to distinct sets of genes that exert their maximum influence during different periods of development. This includes analyses of social-communication difficulties that share, depending on their developmental stage, stronger genetic links with either Autism Spectrum Disorder or schizophrenia. Here we developed a multivariate analysis framework in unrelated individuals to model directly the developmental profile of genetic influences contributing to complex traits, such as social-communication difficulties, during a ~10-year period spanning childhood and adolescence. Methods: Longitudinally assessed quantitative social-communication problems (N ≤ 5,551) were studied in participants from a UK birth cohort (ALSPAC, 8 to 17 years). Using standardised measures, genetic architectures were investigated with novel multivariate genetic-relationship-matrix structural equation models (GSEM) incorporating whole-genome genotyping information. Analogous to twin research, GSEM included Cholesky decomposition, common pathway and independent pathway models. Results: A 2-factor Cholesky decomposition model described the data best. One genetic factor was common to SCDC measures across development, the other accounted for independent variation at 11 years and later, consistent with distinct developmental profiles in trait-disorder overlap. Importantly, genetic factors operating at 8 years explained only ~50% of the genetic variation at 17 years. Conclusion: Using latent factor models, we identified developmental changes in the genetic architecture of social-communication difficulties that enhance the understanding of ASD and schizophrenia-related dimensions. More generally, GSEM present a framework for modelling shared genetic aetiologies between phenotypes and can provide prior information with respect to patterns and continuity of trait-disorder overlap.
  • St Pourcain, B., Robinson, E. B., Anttila, V., Sullivan, B. B., Maller, J., Golding, J., Skuse, D., Ring, S., Evans, D. M., Zammit, S., Fisher, S. E., Neale, B. M., Anney, R., Ripke, S., Hollegaard, M. V., Werge, T., iPSYCH-SSI-Broad Autism Group, Ronald, A., Grove, J., Hougaard, D. M., Børglum, A. D., Mortensen, P. B., Daly, M., & Davey Smith, G. (2018). ASD and schizophrenia show distinct developmental profiles in common genetic overlap with population-based social-communication difficulties. Molecular Psychiatry, 23, 263-270. doi:10.1038/mp.2016.198.

    Abstract

    Difficulties in social communication are part of the phenotypic overlap between autism spectrum disorders (ASD) and schizophrenia. Both conditions follow, however, distinct developmental patterns. Symptoms of ASD typically occur during early childhood, whereas most symptoms characteristic of schizophrenia do not appear before early adulthood. We investigated whether overlap in common genetic influences between these clinical conditions and impairments in social communication depends on the developmental stage of the assessed trait. Social communication difficulties were measured in typically-developing youth (Avon Longitudinal Study of Parents and Children, N ⩽ 5553, longitudinal assessments at 8, 11, 14 and 17 years) using the Social Communication Disorder Checklist. Data on clinical ASD (PGC-ASD: 5305 cases, 5305 pseudo-controls; iPSYCH-ASD: 7783 cases, 11 359 controls) and schizophrenia (PGC-SCZ2: 34 241 cases, 45 604 controls, 1235 trios) were either obtained through the Psychiatric Genomics Consortium (PGC) or the Danish iPSYCH project. Overlap in genetic influences between ASD and social communication difficulties during development decreased with age, both in the PGC-ASD and the iPSYCH-ASD sample. Genetic overlap between schizophrenia and social communication difficulties, by contrast, persisted across age, as observed within two independent PGC-SCZ2 subsamples, and showed an increase in magnitude for traits assessed during later adolescence. ASD- and schizophrenia-related polygenic effects were unrelated to each other, and changes in trait-disorder links reflect the heterogeneity of genetic factors influencing social communication difficulties during childhood versus later adolescence. Thus, both clinical ASD and schizophrenia share some genetic influences with impairments in social communication, but reveal distinct developmental profiles in their genetic links, consistent with the onset of clinical symptoms.

    Additional information

    mp2016198x1.docx
  • Pouw, W., Van Gog, T., Zwaan, R. A., Agostinho, S., & Paas, F. (2018). Co-thought gestures in children's mental problem solving: Prevalence and effects on subsequent performance. Applied Cognitive Psychology, 32(1), 66-80. doi:10.1002/acp.3380.

    Abstract

    Co-thought gestures are understudied compared to co-speech gestures, yet they may provide insight into cognitive functions of gestures that are independent of speech processes. A recent study with adults showed that co-thought gesticulation occurred spontaneously during mental preparation of problem solving. Moreover, co-thought gesturing (either spontaneous or instructed) during mental preparation was effective for subsequent solving of the Tower of Hanoi under conditions of high cognitive load (i.e., when visual working memory capacity was limited and when the task was more difficult). In this preregistered study, we investigated whether co-thought gestures would also spontaneously occur and would aid problem-solving processes in children (N = 74; 8-12 years old) under high load conditions. Although children also spontaneously used co-thought gestures during mental problem solving, this did not aid their subsequent performance when physically solving the problem. If these null results are on track, co-thought gesture effects may be different in adults and children.

  • Quinn, S., Donnelly, S., & Kidd, E. (2018). The relationship between symbolic play and language acquisition: A meta-analytic review. Developmental Review, 49, 121-135. doi:10.1016/j.dr.2018.05.005.

    Abstract

    A developmental relationship between symbolic play and language has long been proposed, going as far back as the writings of Piaget and Vygotsky. In the current paper we build on recent qualitative reviews of the literature by reporting the first quantitative analysis of the relationship. We conducted a three-level meta-analysis of past studies that have investigated the relationship between symbolic play and language acquisition. Thirty-five studies (N = 6848) met the criteria for inclusion. Overall, we observed a significant small-to-medium association between the two domains (r = .35). Several moderating variables were included in the analyses, including: (i) study design (longitudinal, concurrent), (ii) the manner in which language was measured (comprehension, production), and (iii) the age at which this relationship is measured. The effect was weakly moderated by these three variables, but overall the association was robust, suggesting that symbolic play and language are closely related in development.

    Additional information

    Quinn_Donnelly_Kidd_2018sup.docx
  • Räsänen, O., Seshadri, S., & Casillas, M. (2018). Comparison of syllabification algorithms and training strategies for robust word count estimation across different languages and recording conditions. In Proceedings of Interspeech 2018 (pp. 1200-1204). doi:10.21437/Interspeech.2018-1047.

    Abstract

    Word count estimation (WCE) from audio recordings has a number of applications, including quantifying the amount of speech that language-learning infants hear in their natural environments, as captured by daylong recordings made with devices worn by infants. To be applicable in a wide range of scenarios and also low-resource domains, WCE tools should be extremely robust against varying signal conditions and require minimal access to labeled training data in the target domain. For this purpose, earlier work has used automatic syllabification of speech, followed by a least-squares-mapping of syllables to word counts. This paper compares a number of previously proposed syllabifiers in the WCE task, including a supervised bi-directional long short-term memory (BLSTM) network that is trained on a language for which high quality syllable annotations are available (a “high resource language”), and reports how the alternative methods compare on different languages and signal conditions. We also explore additive noise and varying-channel data augmentation strategies for BLSTM training, and show how they improve performance in both matching and mismatching languages. Intriguingly, we also find that even though the BLSTM works on languages beyond its training data, the unsupervised algorithms can still outperform it in challenging signal conditions on novel languages.
  • Ravignani, A. (2018). Darwin, sexual selection, and the origins of music. Trends in Ecology and Evolution, 33(10), 716-719. doi:10.1016/j.tree.2018.07.006.

    Abstract

    Humans devote ample time to produce and perceive music. How and why this behavioral propensity originated in our species is unknown. For centuries, speculation dominated the study of the evolutionary origins of musicality. Following Darwin’s early intuitions, recent empirical research is opening a new chapter to tackle this mystery.
  • Ravignani, A. (2018). Comment on “Temporal and spatial variation in harbor seal (Phoca vitulina L.) roar calls from southern Scandinavia” [J. Acoust. Soc. Am. 141, 1824-1834 (2017)]. The Journal of the Acoustical Society of America, 143, 504-508. doi:10.1121/1.5021770.

    Abstract

    In their recent article, Sabinsky and colleagues investigated heterogeneity in harbor seals' vocalizations. The authors found seasonal and geographical variation in acoustic parameters, warning readers that recording conditions might account for some of their results. This paper expands on the temporal aspect of the encountered heterogeneity in harbor seals' vocalizations. Temporal information is the least susceptible to variable recording conditions. Hence geographical and seasonal variability in roar timing constitutes the most robust finding in the target article. In pinnipeds, evidence of timing and rhythm in the millisecond range—as opposed to circadian and seasonal rhythms—has theoretical and interdisciplinary relevance. In fact, the study of rhythm and timing in harbor seals is particularly decisive to support or confute a cross-species hypothesis, causally linking the evolution of vocal production learning and rhythm. The results by Sabinsky and colleagues can shed light on current scientific questions beyond pinniped bioacoustics, and help formulate empirically testable predictions.
  • Ravignani, A., Chiandetti, C., & Gamba, M. (2018). L'evoluzione del ritmo [The evolution of rhythm]. Le Scienze, (4 May 2018).
  • Ravignani, A., Thompson, B., Grossi, T., Delgado, T., & Kirby, S. (2018). Evolving building blocks of rhythm: How human cognition creates music via cultural transmission. Annals of the New York Academy of Sciences, 1423(1), 176-187. doi:10.1111/nyas.13610.

    Abstract

    Why does musical rhythm have the structure it does? Musical rhythm, in all its cross-cultural diversity, exhibits commonalities across world cultures. Traditionally, music research has been split into two fields. Some scientists focused on musicality, namely the human biocognitive predispositions for music, with an emphasis on cross-cultural similarities. Other scholars investigated music, seen as a cultural product, focusing on the variation in world musical cultures. Recent experiments found deep connections between music and musicality, reconciling these opposing views. Here, we address the question of how individual cognitive biases affect the process of cultural evolution of music. Data from two experiments are analyzed using two complementary techniques. In the experiments, participants hear drumming patterns and imitate them. These patterns are then given to the same or another participant to imitate. The structure of these initially random patterns is tracked along experimental “generations.” Frequentist statistics show how participants’ biases are amplified by cultural transmission, making drumming patterns more structured. Structure is achieved faster in transmission within rather than between participants. A Bayesian model approximates the motif structures participants learned and created. Our data and models suggest that individual biases for musicality may shape the cultural transmission of musical rhythm.

    Additional information

    nyas13610-sup-0001-suppmat.pdf
  • Ravignani, A., Thompson, B., & Filippi, P. (2018). The evolution of musicality: What can be learned from language evolution research? Frontiers in Neuroscience, 12: 20. doi:10.3389/fnins.2018.00020.

    Abstract

    Language and music share many commonalities, both as natural phenomena and as subjects of intellectual inquiry. Rather than exhaustively reviewing these connections, we focus on potential cross-pollination of methodological inquiries and attitudes. We highlight areas in which scholarship on the evolution of language may inform the evolution of music. We focus on the value of coupled empirical and formal methodologies, and on the futility of mysterianism, the declining view that the nature, origins and evolution of language cannot be addressed empirically. We identify key areas in which the evolution of language as a discipline has flourished historically, and suggest ways in which these advances can be integrated into the study of the evolution of music.
  • Ravignani, A. (2018). Spontaneous rhythms in a harbor seal pup calls. BMC Research Notes, 11: 3. doi:10.1186/s13104-017-3107-6.

    Abstract

    Objectives: Timing and rhythm (i.e. temporal structure) are crucial, though historically neglected, dimensions of animal communication. When investigating these in non-human animals, it is often difficult to balance experimental control and ecological validity. Here I present the first step of an attempt to balance the two, focusing on the timing of vocal rhythms in a harbor seal pup (Phoca vitulina). Collection of this data had a clear aim: To find spontaneous vocal rhythms in this individual in order to design individually-adapted and ecologically-relevant stimuli for a later playback experiment. Data description: The calls of one seal pup were recorded. The audio recordings were annotated using Praat, a free software to analyze vocalizations in humans and other animals. The annotated onsets and offsets of vocalizations were then imported in a Python script. The script extracted three types of timing information: the duration of calls, the intervals between calls’ onsets, and the intervals between calls’ maximum-intensity peaks. Based on the annotated data, available to download, I provide simple descriptive statistics for these temporal measures, and compare their distributions.
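
    The three timing measures described in this abstract can be sketched in a few lines. The following is a hypothetical illustration only (the actual study parsed Praat annotations and also extracted intensity-peak intervals), assuming call onsets and offsets are already available as matched lists of times in seconds:

    ```python
    def call_timing(onsets, offsets):
        """Compute per-call durations and inter-onset intervals (IOIs)
        from matched lists of call onset/offset times in seconds."""
        durations = [off - on for on, off in zip(onsets, offsets)]
        iois = [later - earlier for earlier, later in zip(onsets, onsets[1:])]
        return durations, iois

    # Three hypothetical calls: onsets and offsets in seconds
    durations, iois = call_timing([0.0, 1.5, 3.2], [0.4, 2.1, 3.9])
    ```

    For the example above, durations come out near 0.4, 0.6, and 0.7 s, and the two IOIs near 1.5 and 1.7 s; distributions of such values are what the descriptive statistics in the paper compare.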
  • Ravignani, A., Garcia, M., Gross, S., de Reus, K., Hoeksema, N., Rubio-Garcia, A., & de Boer, B. (2018). Pinnipeds have something to say about speech and rhythm. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 399-401). Toruń, Poland: NCU Press. doi:10.12775/3991-1.095.
  • Ravignani, A., & Verhoef, T. (2018). Which melodic universals emerge from repeated signaling games?: A Note on Lumaca and Baggio (2017). Artificial Life, 24(2), 149-153. doi:10.1162/ARTL_a_00259.

    Abstract

    Music is a peculiar human behavior, yet we still know little as to why and how music emerged. For centuries, the study of music has been the sole prerogative of the humanities. Lately, however, music is being increasingly investigated by psychologists, neuroscientists, biologists, and computer scientists. One approach to studying the origins of music is to empirically test hypotheses about the mechanisms behind this structured behavior. Recent lab experiments show how musical rhythm and melody can emerge via the process of cultural transmission. In particular, Lumaca and Baggio (2017) tested the emergence of a sound system at the boundary between music and language. In this study, participants were given random pairs of signal-meanings; when participants negotiated their meaning and played a “game of telephone” with them, these pairs became more structured and systematic. Over time, the small biases introduced in each artificial transmission step accumulated, displaying quantitative trends, including the emergence, over the course of artificial human generations, of features resembling properties of language and music. In this Note, we highlight the importance of Lumaca and Baggio’s experiment, place it in the broader literature on the evolution of language and music, and suggest refinements for future experiments. We conclude that, while psychological evidence for the emergence of proto-musical features is accumulating, complementary work is needed: Mathematical modeling and computer simulations should be used to test the internal consistency of experimentally generated hypotheses and to make new predictions.
  • Ravignani, A., Thompson, B., Lumaca, M., & Grube, M. (2018). Why do durations in musical rhythms conform to small integer ratios? Frontiers in Computational Neuroscience, 12: 86. doi:10.3389/fncom.2018.00086.

    Abstract

    One curious aspect of human timing is the organization of rhythmic patterns in small integer ratios. Behavioral and neural research has shown that adjacent time intervals in rhythms tend to be perceived and reproduced as approximate fractions of small numbers (e.g., 3/2). Recent work on iterated learning and reproduction further supports this: given a randomly timed drum pattern to reproduce, participants subconsciously transform it toward small integer ratios. The mechanisms accounting for this “attractor” phenomenon are little understood, but might be explained by combining two theoretical frameworks from psychophysics. The scalar expectancy theory describes time interval perception and reproduction in terms of Weber's law: just detectable durational differences equal a constant fraction of the reference duration. The notion of categorical perception emphasizes the tendency to perceive time intervals in categories, i.e., “short” vs. “long.” In this piece, we put forward the hypothesis that the integer-ratio bias in rhythm perception and production might arise from the interaction of the scalar property of timing with the categorical perception of time intervals, and that neurally it can plausibly be related to oscillatory activity. We support our integrative approach with mathematical derivations to formalize assumptions and provide testable predictions. We present equations to calculate durational ratios by: (i) parameterizing the relationship between durational categories, (ii) assuming a scalar timing constant, and (iii) specifying one (of K) category of ratios. Our derivations provide the basis for future computational, behavioral, and neurophysiological work to test our model.
  • Raviv, L., & Arnon, I. (2018). Systematicity, but not compositionality: Examining the emergence of linguistic structure in children and adults using iterated learning. Cognition, 181, 160-173. doi:10.1016/j.cognition.2018.08.011.

    Abstract

    Recent work suggests that cultural transmission can lead to the emergence of linguistic structure as speakers’ weak individual biases become amplified through iterated learning. However, to date no published study has demonstrated a similar emergence of linguistic structure in children. The lack of evidence from child learners constitutes a problematic gap in the literature: if such learning biases impact the emergence of linguistic structure, they should also be found in children, who are the primary learners in real-life language transmission. However, children may differ from adults in their biases given age-related differences in general cognitive skills. Moreover, adults’ performance on iterated learning tasks may reflect existing (and explicit) linguistic biases, partially undermining the generality of the results. Examining children’s performance can also help evaluate contrasting predictions about their role in emerging languages: do children play a larger or smaller role than adults in the creation of structure? Here, we report a series of four iterated artificial language learning studies (based on Kirby, Cornish & Smith, 2008) with both children and adults, using a novel child-friendly paradigm. Our results show that linguistic structure does not emerge more readily in children compared to adults, and that adults are overall better in both language learning and in creating linguistic structure. When languages could become underspecified (by allowing homonyms), children and adults were similar in developing consistent mappings between meanings and signals in the form of structured ambiguities. However, when homonymy was not allowed, only adults created compositional structure. This study is a first step in using iterated language learning paradigms to explore child-adult differences. It provides the first demonstration that cultural transmission has a different effect on the languages produced by children and adults: While children were able to develop systematicity, their languages did not show compositionality. We focus on the relation between learning and structure creation as a possible explanation for our findings and discuss implications for children’s role in the emergence of linguistic structure.

    Additional information

    results A results B results D stimuli
  • Raviv, L., & Arnon, I. (2018). The developmental trajectory of children’s auditory and visual statistical learning abilities: Modality-based differences in the effect of age. Developmental Science, 21(4): e12593. doi:10.1111/desc.12593.

    Abstract

    Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions on the domain generality of SL. However, while SL is well established for infants and adults, only little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only a few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5–12 years). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw.

    Additional information

    Video abstract of the article
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 402-404). Toruń, Poland: NCU Press. doi:10.12775/3991-1.096.
  • Redl, T., Eerland, A., & Sanders, T. J. M. (2018). The processing of the Dutch masculine generic zijn ‘his’ across stereotype contexts: An eye-tracking study. PLoS One, 13(10): e0205903. doi:10.1371/journal.pone.0205903.

    Abstract

    Language users often infer a person’s gender when it is not explicitly mentioned. This information is included in the mental model of the described situation, giving rise to expectations regarding the continuation of the discourse. Such gender inferences can be based on two types of information: gender stereotypes (e.g., nurses are female) and masculine generics, which are grammatically masculine word forms that are used to refer to all genders in certain contexts (e.g., To each his own). In this eye-tracking experiment (N = 82), which is the first to systematically investigate the online processing of masculine generic pronouns, we tested whether the frequently used Dutch masculine generic zijn ‘his’ leads to a male bias. In addition, we tested the effect of context by introducing male, female, and neutral stereotypes. We found no evidence for the hypothesis that the generically-intended masculine pronoun zijn ‘his’ results in a male bias. However, we found an effect of stereotype context. After introducing a female stereotype, reading about a man led to an increase in processing time. However, the reverse did not hold, which parallels the finding in social psychology that men are penalized more for gender-nonconforming behavior. This suggests that language processing is not only affected by the strength of stereotype contexts; the associated disapproval of violating these gender stereotypes affects language processing, too.

    Additional information

    pone.0205903.s001.pdf data files
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001)- Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations between the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we comment on their findings and conclusions, beginning with a brief review of literacy and aphasia research, and complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which focus on methodological issues and the importance of taking normative values into consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Rietbergen, M., Roelofs, A., Den Ouden, H., & Cools, R. (2018). Disentangling cognitive from motor control: Influence of response modality on updating, inhibiting, and shifting. Acta Psychologica, 191, 124-130. doi:10.1016/j.actpsy.2018.09.008.

    Abstract

    It is unclear whether cognitive and motor control are parallel and interactive or serial and independent processes. According to one view, cognitive control refers to a set of modality-nonspecific processes that act on supramodal representations and precede response modality-specific motor processes. An alternative view is that cognitive control represents a set of modality-specific operations that act directly on motor-related representations, implying dependence of cognitive control on motor control. Here, we examined the influence of response modality (vocal vs. manual) on three well-established subcomponent processes of cognitive control: shifting, inhibiting, and updating. We observed effects of all subcomponent processes in reaction times. The magnitude of these effects did not differ between response modalities for shifting and inhibiting, in line with a serial, supramodal view. However, the magnitude of the updating effect differed between modalities, in line with an interactive, modality-specific view. These results suggest that updating represents a modality-specific operation that depends on motor control, whereas shifting and inhibiting represent supramodal operations that act independently of motor control.
  • Rodenas-Cuadrado, P., Mengede, J., Baas, L., Devanna, P., Schmid, T. A., Yartsev, M., Firzlaff, U., & Vernes, S. C. (2018). Mapping the distribution of language related genes FoxP1, FoxP2 and CntnaP2 in the brains of vocal learning bat species. Journal of Comparative Neurology, 526(8), 1235-1266. doi:10.1002/cne.24385.

    Abstract

    Genes including FOXP2, FOXP1 and CNTNAP2, have been implicated in human speech and language phenotypes, pointing to a role in the development of normal language-related circuitry in the brain. Although speech and language are unique human phenotypes, a comparative approach is possible by addressing language-relevant traits in animal model systems. One such trait, vocal learning, represents an essential component of human spoken language, and is shared by cetaceans, pinnipeds, elephants, some birds and bats. Given their vocal learning abilities, gregarious nature, and reliance on vocalisations for social communication and navigation, bats represent an intriguing mammalian system in which to explore language-relevant genes. We used immunohistochemistry to detail the distribution of FoxP2, FoxP1 and Cntnap2 proteins, accompanied by detailed cytoarchitectural histology in the brains of two vocal learning bat species: Phyllostomus discolor and Rousettus aegyptiacus. We show widespread expression of these genes, similar to what has been previously observed in other species, including humans. A striking difference was observed in the adult Phyllostomus discolor bat, which showed low levels of FoxP2 expression in the cortex, contrasting with patterns found in rodents and non-human primates. We created an online, open-access database within which all data can be browsed, searched, and high resolution images viewed to single cell resolution. The data presented herein reveal regions of interest in the bat brain and provide new opportunities to address the role of these language-related genes in complex vocal-motor and vocal learning behaviours in a mammalian model system.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch–English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Rommers, J., & Federmeier, K. D. (2018). Lingering expectations: A pseudo-repetition effect for words previously expected but not presented. NeuroImage, 183, 263-272. doi:10.1016/j.neuroimage.2018.08.023.

    Abstract

    Prediction can help support rapid language processing. However, it is unclear whether prediction has downstream consequences, beyond processing in the moment. In particular, when a prediction is disconfirmed, does it linger, or is it suppressed? This study manipulated whether words were actually seen or were only expected, and probed their fate in memory by presenting the words (again) a few sentences later. If disconfirmed predictions linger, subsequent processing of the previously expected (but never presented) word should be similar to actual word repetition. At initial presentation, electrophysiological signatures of prediction disconfirmation demonstrated that participants had formed expectations. Further downstream, relative to unseen words, repeated words elicited a strong N400 decrease, an enhanced late positive complex (LPC), and late alpha band power decreases. Critically, like repeated words, words previously expected but not presented also attenuated the N400. This “pseudo-repetition effect” suggests that disconfirmed predictions can linger at some stages of processing, and demonstrates that prediction has downstream consequences beyond rapid on-line processing.
  • Rommers, J., & Federmeier, K. D. (2018). Predictability's aftermath: Downstream consequences of word predictability as revealed by repetition effects. Cortex, 101, 16-30. doi:10.1016/j.cortex.2017.12.018.

    Abstract

    Stimulus processing in language and beyond is shaped by context, with predictability having a
    particularly well-attested influence on the rapid processes that unfold during the presentation
    of a word. But does predictability also have downstream consequences for the quality of the
    constructed representations? On the one hand, the ease of processing predictablewordsmight
    free up time or cognitive resources, allowing for relatively thorough processing of the input. On
    the other hand, predictabilitymight allowthe systemto run in a top-down “verificationmode”,
    at the expense of thorough stimulus processing. This electroencephalogram (EEG) study
    manipulated word predictability, which reduced N400 amplitude and inter-trial phase clustering
    (ITPC), and then probed the fate of the (un)predictable words in memory by presenting
    them again. More thorough processing of predictable words should increase repetition effects,
    whereas less thorough processing should decrease them. Repetition was reflected in N400 decreases,
    late positive complex (LPC) enhancements, and late alpha/beta band power decreases.
    Critically, prior predictability tended to reduce the repetition effect on the N400, suggesting less
    priming, and eliminated the repetition effect on the LPC, suggesting a lack of episodic recollection.
    These findings converge on a top-down verification account, on which the brain processes
    more predictable input less thoroughly. More generally, the results demonstrate that
    predictability has multifaceted downstream consequences beyond processing in the moment.
  • Rossi, G. (2018). Composite social actions: The case of factual declaratives in everyday interaction. Research on Language and Social Interaction, 51(4), 379-397. doi:10.1080/08351813.2018.1524562.

    Abstract

    When taking a turn at talk, a speaker normally accomplishes a sequential action such as a question, answer, complaint, or request. Sometimes, however, a turn at talk may accomplish not a single but a composite action, involving a combination of more than one action. I show that factual declaratives (e.g., “the feed drip has finished”) are recurrently used to implement composite actions consisting of both an informing and a request or, alternatively, a criticism and a request. A key determinant between these is the recipient’s epistemic access to what the speaker is describing. Factual declaratives afford a range of possible responses, which can tell us how the composite action has been understood and give us insights into its underlying structure. Evidence for the stacking of composite actions, however, is not always directly available in the response and may need to be pieced together with the help of other linguistic and contextual considerations. Data are in Italian with English translation.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English speaking children acquire wh-questions is determined by two interlocking linguistic factors; the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rowland, C. F. (2018). The principles of scientific inquiry. Linguistic Approaches to Bilingualism, 8(6), 770-775. doi:10.1075/lab.18056.row.
  • Rubio-Fernández, P. (2018). Trying to discredit the Duplo task with a partial replication: Reply to Paulus and Kammermeier (2018). Cognitive Development, 48, 286-288. doi:10.1016/j.cogdev.2018.07.006.

    Abstract

    Kammermeier and Paulus (2018) report a partial replication of the results of Rubio-Fernández and Geurts (2013) but present their study as a failed replication. Paulus and Kammermeier (2018) insist on a negative interpretation of their findings, discrediting the Duplo task against their own empirical evidence. Here I argue that Paulus and Kammermeier may try to make an impactful contribution to the field by adding to the growing skepticism towards early Theory of Mind studies, but fail to make any significant contribution to our understanding of young children’s Theory of Mind abilities.
  • Rubio-Fernández, P. (2018). What do failed (and successful) replications with the Duplo task show? Cognitive Development, 48, 316-320. doi:10.1016/j.cogdev.2018.07.004.
  • Rubio-Fernández, P., & Jara-Ettinger, J. (2018). Joint inferences of speakers’ beliefs and referents based on how they speak. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 991-996). Austin, TX: Cognitive Science Society.

    Abstract

    For almost two decades, the poor performance observed with the so-called Director task has been interpreted as evidence of limited use of Theory of Mind in communication. Here we propose a probabilistic model of common ground in referential communication that derives three inferences from an utterance: what the speaker is talking about in a visual context, what she knows about the context, and what referential expressions she prefers. We tested our model by comparing its inferences with those made by human participants and found that it closely mirrors their judgments, whereas an alternative model compromising the hearer’s expectations of cooperativeness and efficiency reveals a worse fit to the human data. Rather than assuming that common ground is fixed in a given exchange and may or may not constrain reference resolution, we show how common ground can be inferred as part of the process of reference assignment.
  • Rubio-Fernández, P., Breheny, R., & Lee, M. W. (2003). Context-independent information in concepts: An investigation of the notion of ‘core features’. In Proceedings of the 25th Annual Conference of the Cognitive Science Society (CogSci 2003). Austin, TX: Cognitive Science Society.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that has been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Saleh, A., Beck, T., Galke, L., & Scherp, A. (2018). Performance comparison of ad-hoc retrieval models over full-text vs. titles of documents. In M. Dobreva, A. Hinze, & M. Žumer (Eds.), Maturity and Innovation in Digital Libraries: 20th International Conference on Asia-Pacific Digital Libraries, ICADL 2018, Hamilton, New Zealand, November 19-22, 2018, Proceedings (pp. 290-303). Cham, Switzerland: Springer.

    Abstract

    While there are many studies on information retrieval models using full-text, there are presently no comparison studies of full-text retrieval vs. retrieval only over the titles of documents. On the one hand, the full-text of documents like scientific papers is not always available due to, e.g., copyright policies of academic publishers. On the other hand, conducting a search based on titles alone has strong limitations. Titles are short and therefore may not contain enough information to yield satisfactory search results. In this paper, we compare different retrieval models regarding their search performance on the full-text vs. only titles of documents. We use different datasets, including the three digital library datasets: EconBiz, IREON, and PubMed. The results show that it is possible to build effective title-based retrieval models that provide competitive results comparable to full-text retrieval. On average, the evaluation results of the best title-based retrieval models are only 3% below those of the best full-text-based retrieval models.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • San Roque, L., Kendrick, K. H., Norcliffe, E., & Majid, A. (2018). Universal meaning extensions of perception verbs are grounded in interaction. Cognitive Linguistics, 29, 371-406. doi:10.1515/cog-2017-0034.
  • Schaeffer, J., van Witteloostuijn, M., & Creemers, A. (2018). Article choice, theory of mind, and memory in children with high-functioning autism and children with specific language impairment. Applied Psycholinguistics, 39(1), 89-115. doi:10.1017/S0142716417000492.

    Abstract

    Previous studies show that young, typically developing (TD) children (age 5) make errors in the choice between a definite and an indefinite article. Suggested explanations for overgeneration of the definite article include failure to distinguish speaker from hearer assumptions, and, for overgeneration of the indefinite article, failure to draw scalar implicatures and weak working memory. However, no direct empirical evidence for these accounts is available. In this study, 27 Dutch-speaking children with high-functioning autism, 27 children with SLI, and 27 TD children aged 5–14 were administered a pragmatic article choice test, a nonverbal theory of mind test, and three types of memory tests (phonological memory, verbal, and nonverbal working memory). The results show that the children with high-functioning autism and SLI (a) make similar errors, that is, they overgenerate the indefinite article; (b) are TD-like at theory of mind, but (c) perform significantly more poorly than the TD children on phonological memory and verbal working memory. We propose that weak memory skills prevent the integration of the definiteness scale with the preceding discourse, resulting in the failure to consistently draw the relevant scalar implicature. This in turn yields the occasional erroneous choice of the indefinite article a in definite contexts.
  • Scharenborg, O., & Merkx, D. (2018). The role of articulatory feature representation quality in a computational model of human spoken-word recognition. In Proceedings of the Machine Learning in Speech and Language Processing Workshop (MLSLP 2018).

    Abstract

    Fine-Tracker is a speech-based model of human speech
    recognition. While previous work has shown that Fine-Tracker
    is successful at modelling aspects of human spoken-word
    recognition, its speech recognition performance is not
    comparable to that of human performance, possibly due to
    suboptimal intermediate articulatory feature (AF)
    representations. This study investigates the effect of improved
    AF representations, obtained using a state-of-the-art deep
    convolutional network, on Fine-Tracker’s simulation and
    recognition performance: Although the improved AF quality
    resulted in improved speech recognition, it surprisingly did
    not lead to an improvement in Fine-Tracker’s simulation power.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.

    Abstract

    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2003). Recognising 'real-life' speech with SpeM: A speech-based computational model of human speech recognition. In Eurospeech 2003 (pp. 2285-2288).

    Abstract

    In this paper, we present a novel computational model of human speech recognition – called SpeM – based on the theory underlying Shortlist. We will show that SpeM, in combination with an automatic phone recogniser (APR), is able to simulate the human speech recognition process from the acoustic signal to the ultimate recognition of words. This joint model takes an acoustic speech file as input and calculates the activation flows of candidate words on the basis of the degree of fit of the candidate words with the input. Experiments showed that SpeM outperforms Shortlist on the recognition of ‘real-life’ input. Furthermore, SpeM performs only slightly worse than an off-the-shelf full-blown automatic speech recogniser in which all words are equally probable, while it provides a transparent computationally elegant paradigm for modelling word activations in human word recognition.
  • Schijven, D., Kofink, D., Tragante, V., Verkerke, M., Pulit, S. L., Kahn, R. S., Veldink, J. H., Vinkers, C. H., Boks, M. P., & Luykx, J. J. (2018). Comprehensive pathway analyses of schizophrenia risk loci point to dysfunctional postsynaptic signaling. Schizophrenia Research, 199, 195-202. doi:10.1016/j.schres.2018.03.032.

    Abstract

    Large-scale genome-wide association studies (GWAS) have implicated many low-penetrance loci in schizophrenia. However, its pathological mechanisms are poorly understood, which in turn hampers the development of novel pharmacological treatments. Pathway and gene set analyses carry the potential to generate hypotheses about disease mechanisms and have provided biological context to genome-wide data of schizophrenia. We aimed to examine which biological processes are likely candidates to underlie schizophrenia by integrating novel and powerful pathway analysis tools using data from the largest Psychiatric Genomics Consortium schizophrenia GWAS (N=79,845) and the most recent 2018 schizophrenia GWAS (N=105,318). By applying a primary unbiased analysis (Multi-marker Analysis of GenoMic Annotation; MAGMA) to weigh the role of biological processes from the Molecular Signatures Database (MSigDB), we identified enrichment of common variants in synaptic plasticity and neuron differentiation gene sets. We supported these findings using MAGMA, Meta-Analysis Gene-set Enrichment of variaNT Associations (MAGENTA) and Interval Enrichment Analysis (INRICH) on detailed synaptic signaling pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and found enrichment in mainly the dopaminergic and cholinergic synapses. Moreover, shared genes involved in these neurotransmitter systems had a large contribution to the observed enrichment, protein products of top genes in these pathways showed more direct and indirect interactions than expected by chance, and expression profiles of these genes were largely similar among brain tissues. In conclusion, we provide strong and consistent genetics and protein-interaction informed evidence for the role of postsynaptic signaling processes in schizophrenia, opening avenues for future translational and psychopharmacological studies.
  • Schilberg, L., Engelen, T., Ten Oever, S., Schuhmann, T., De Gelder, B., De Graaf, T. A., & Sack, A. T. (2018). Phase of beta-frequency tACS over primary motor cortex modulates corticospinal excitability. Cortex, 103, 142-152. doi:10.1016/j.cortex.2018.03.001.

    Abstract

    The assessment of corticospinal excitability by means of transcranial magnetic stimulation-induced motor evoked potentials is an established diagnostic tool in neurophysiology and a widely used procedure in fundamental brain research. However, concern about low reliability of these measures has grown recently. One possible cause of high variability of MEPs under identical acquisition conditions could be the influence of oscillatory neuronal activity on corticospinal excitability. Based on research showing that transcranial alternating current stimulation can entrain neuronal oscillations we here test whether alpha or beta frequency tACS can influence corticospinal excitability in a phase-dependent manner. We applied tACS at individually calibrated alpha- and beta-band oscillation frequencies, or we applied sham tACS. Simultaneous single TMS pulses time locked to eight equidistant phases of the ongoing tACS signal evoked MEPs. To evaluate offline effects of stimulation frequency, MEP amplitudes were measured before and after tACS. To evaluate whether tACS influences MEP amplitude, we fitted one-cycle sinusoids to the average MEPs elicited at the different phase conditions of each tACS frequency. We found no frequency-specific offline effects of tACS. However, beta-frequency tACS modulation of MEPs was phase-dependent. Post hoc analyses suggested that this effect was specific to participants with low (<19 Hz) intrinsic beta frequency. In conclusion, by showing that beta tACS influences MEP amplitude in a phase-dependent manner, our results support a potential role attributed to neuronal oscillations in regulating corticospinal excitability. Moreover, our findings may be useful for the development of TMS protocols that improve the reliability of MEPs as a meaningful tool for research applications or for clinical monitoring and diagnosis.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Schiller, N. O. (2003). Metrical stress in speech production: A time course study. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 451-454). Adelaide: Causal Productions.

    Abstract

    This study investigated the encoding of metrical information during speech production in Dutch. In Experiment 1, participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., LEpel 'spoon') than for targets with final stress (e.g., liBEL 'dragon fly'; capital letters indicate stressed syllables) and revealed that the monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with bi- and trisyllabic picture names. These results demonstrate that metrical information of words is encoded rightward incrementally during phonological encoding in speech production. The results of these experiments are in line with Levelt's model of phonological encoding.
  • Schillingmann, L., Ernst, J., Keite, V., Wrede, B., Meyer, A. S., & Belke, E. (2018). AlignTool: The automatic temporal alignment of spoken utterances in German, Dutch, and British English for psycholinguistic purposes. Behavior Research Methods, 50(2), 466-489. doi:10.3758/s13428-017-1002-7.

    Abstract

    In language production research, the latency with which speakers produce a spoken response to a stimulus and the onset and offset times of words in longer utterances are key dependent variables. Measuring these variables automatically often yields partially incorrect results. However, exact measurements through the visual inspection of the recordings are extremely time-consuming. We present AlignTool, an open-source alignment tool that establishes preliminarily the onset and offset times of words and phonemes in spoken utterances using Praat, and subsequently performs a forced alignment of the spoken utterances and their orthographic transcriptions in the automatic speech recognition system MAUS. AlignTool creates a Praat TextGrid file for inspection and manual correction by the user, if necessary. We evaluated AlignTool’s performance with recordings of single-word and four-word utterances as well as semi-spontaneous speech. AlignTool performs well with audio signals with an excellent signal-to-noise ratio, requiring virtually no corrections. For audio signals of lesser quality, AlignTool still is highly functional but its results may require more frequent manual corrections. We also found that audio recordings including long silent intervals tended to pose greater difficulties for AlignTool than recordings filled with speech, which AlignTool analyzed well overall. We expect that by semi-automatizing the temporal analysis of complex utterances, AlignTool will open new avenues in language production research.
  • Schoenmakers, G.-J., & Piepers, J. (2018). Echter kan het wel. Levende Talen Magazine, 105(4), 10-13.
  • Schweinfurth, M. K., De Troy, S. E., Van Leeuwen, E. J. C., Call, J., & Haun, D. B. M. (2018). Spontaneous social tool use in Chimpanzees (Pan troglodytes). Journal of Comparative Psychology, 132(4), 455-463. doi:10.1037/com0000127.

    Abstract

    Although there is good evidence that social animals show elaborate cognitive skills to deal with others, there are few reports of animals physically using social agents and their respective responses as means to an end—social tool use. In this case study, we investigated spontaneous and repeated social tool use behavior in chimpanzees (Pan troglodytes). We presented a group of chimpanzees with an apparatus, in which pushing two buttons would release juice from a distantly located fountain. Consequently, any one individual could only either push the buttons or drink from the fountain but never push and drink simultaneously. In this scenario, an adult male attempted to retrieve three other individuals and push them toward the buttons that, if pressed, released juice from the fountain. With this strategy, the social tool user increased his juice intake 10-fold. Interestingly, the strategy was stable over time, which was possibly enabled by playing with the social tools. With over 100 instances, we provide the biggest data set on social tool use recorded among nonhuman animals so far. The repeated use of other individuals as social tools may represent a complex social skill linked to Machiavellian intelligence.
  • Seeliger, K., Fritsche, M., Güçlü, U., Schoenmakers, S., Schoffelen, J.-M., Bosch, S. E., & Van Gerven, M. A. J. (2018). Convolutional neural network-based encoding and decoding of visual object recognition in space and time. NeuroImage, 180, 253-266. doi:10.1016/j.neuroimage.2017.07.018.

    Abstract

    Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely
    investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance
    imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in
    the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain
    signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we
    addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG).
    Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled
    their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward
    sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade
    was captured by the network layer representations, where the increasingly abstract stimulus representation in the
    hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral
    stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out
    validation set of viewed objects, achieving state-of-the-art decoding accuracy.
  • Segaert, K., Mazaheri, A., & Hagoort, P. (2018). Binding language: Structuring sentences through precisely timed oscillatory mechanisms. European Journal of Neuroscience, 48(7), 2651-2662. doi:10.1111/ejn.13816.

    Abstract

    Syntactic binding refers to combining words into larger structures. Using EEG, we investigated the neural processes involved in syntactic binding. Participants were auditorily presented two-word sentences (i.e. pronoun and pseudoverb such as ‘I grush’, ‘she grushes’, for which syntactic binding can take place) and wordlists (i.e. two pseudoverbs such as ‘pob grush’, ‘pob grushes’, for which no binding occurs). Comparing these two conditions, we targeted syntactic binding while minimizing contributions of semantic binding and of other cognitive processes such as working memory. We found a converging pattern of results using two distinct analysis approaches: one approach using frequency bands as defined in previous literature, and one data-driven approach in which we looked at the entire range of frequencies between 3-30 Hz without the constraints of pre-defined frequency bands. In the syntactic binding (relative to the wordlist) condition, a power increase was observed in the alpha and beta frequency range shortly preceding the presentation of the target word that requires binding, which was maximal over frontal-central electrodes. Our interpretation is that these signatures reflect that language comprehenders expect the need for binding to occur. Following the presentation of the target word in a syntactic binding context (relative to the wordlist condition), an increase in alpha power maximal over a left lateralized cluster of frontal-temporal electrodes was observed. We suggest that this alpha increase relates to syntactic binding taking place. Taken together, our findings suggest that increases in alpha and beta power are reflections of the distinct neural processes underlying syntactic binding.
  • Seidl, A., & Johnson, E. K. (2003). Position and vowel quality effects in infant's segmentation of vowel-initial words. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2233-2236). Adelaide: Causal Productions.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña. Faits de Langues, 21, 121-132.
  • Seifart, F., Evans, N., Hammarström, H., & Levinson, S. C. (2018). Language documentation twenty-five years on. Language, 94(4), e324-e345. doi:10.1353/lan.2018.0070.

    Abstract

    This discussion note reviews responses of the linguistics profession to the grave issues of language
    endangerment identified a quarter of a century ago in the journal Language by Krauss,
    Hale, England, Craig, and others (Hale et al. 1992). Two and a half decades of worldwide research
    not only have given us a much more accurate picture of the number, phylogeny, and typological
    variety of the world’s languages, but they have also seen the development of a wide range of new
    approaches, conceptual and technological, to the problem of documenting them. We review these
    approaches and the manifold discoveries they have unearthed about the enormous variety of linguistic
    structures. The reach of our knowledge has increased by about 15% of the world’s languages,
    especially in terms of digitally archived material, with about 500 languages now
    reasonably documented thanks to such major programs as DoBeS, ELDP, and DEL. But linguists
    are still falling behind in the race to document the planet’s rapidly dwindling linguistic diversity,
    with around 35–42% of the world’s languages still substantially undocumented, and in certain
    countries (such as the US) the call by Krauss (1992) for a significant professional realignment toward
    language documentation has only been heeded in a few institutions. Apart from the need for
    an intensified documentarist push in the face of accelerating language loss, we argue that existing
    language documentation efforts need to do much more to focus on crosslinguistically comparable
    data sets, sociolinguistic context, semantics, and interpretation of text material, and on methods
    for bridging the ‘transcription bottleneck’, which is creating a huge gap between the amount we
    can record and the amount in our transcribed corpora.
  • Sekine, K., Wood, C., & Kita, S. (2018). Gestural depiction of motion events in narrative increases symbolic distance with age. Language, Interaction and Acquisition, 9(1), 11-21. doi:10.1075/lia.15020.sek.

    Abstract

    We examined gesture representation of motion events in narratives produced by three- and nine-year-olds, and adults. Two aspects of gestural depiction were analysed: how protagonists were depicted, and how gesture space was used. We found that older groups were more likely to express protagonists as an object that a gesturing hand held and manipulated, and less likely to express protagonists with whole-body enactment gestures. Furthermore, for older groups, gesture space increasingly became less similar to narrated space. The older groups were less likely to use large gestures or gestures in the periphery of the gesture space to represent movements that were large relative to a protagonist’s body or that took place next to a protagonist. They were also less likely to produce gestures on a physical surface (e.g. table) to represent movement on a surface in narrated events. The development of gestural depiction indicates that older speakers become less immersed in the story world and start to control and manipulate story representation from an outside perspective in a bounded and stage-like gesture space. We discuss this developmental shift in terms of increasing symbolic distancing (Werner & Kaplan, 1963).
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (2003). [Review of the book Representing space in Oceania: Culture in language and mind ed. by Giovanni Bennardo]. Journal of the Polynesian Society, 112, 169-171.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.
  • Senft, G. (1985). Kilivila: Die Sprache der Trobriander. Studium Linguistik, 17/18, 127-138.
  • Senft, G. (1985). Klassifikationspartikel im Kilivila: Glossen zu ihrer morphologischen Rolle, ihrem Inventar und ihrer Funktion in Satz und Diskurs. Linguistische Berichte, 99, 373-393.
  • Senft, G. (1985). Weyeis Wettermagie: Eine ethnolinguistische Untersuchung von fünf magischen Formeln eines Wettermagiers auf den Trobriand Inseln. Zeitschrift für Ethnologie, 110(2), 67-90.
  • Senft, G. (1985). Trauer auf Trobriand: Eine ethnologisch/-linguistische Fallstudie. Anthropos, 80, 471-492.
  • Seuren, P. A. M. (1975). Autonomous syntax and prelexical rules. In S. De Vriendt, J. Dierickx, & M. Wilmet (Eds.), Grammaire générative et psychomécanique du langage: actes du colloque organisé par le Centre d'études linguistiques et littéraires de la Vrije Universiteit Brussel, Bruxelles, 29-31 mai 1974 (pp. 89-98). Paris: Didier.
  • Seuren, P. A. M. (1983). [Review of the book The inheritance of presupposition by J. Dinsmore]. Journal of Semantics, 2(3/4), 356-358. doi:10.1093/semant/2.3-4.356.
  • Seuren, P. A. M. (1983). [Review of the book Thirty million theories of grammar by J. McCawley]. Journal of Semantics, 2(3/4), 325-341. doi:10.1093/semant/2.3-4.325.
  • Seuren, P. A. M. (1983). In memoriam Jan Voorhoeve. Bijdragen tot de Taal-, Land- en Volkenkunde, 139(4), 403-406.
  • Seuren, P. A. M. (1975). Logic and language. In S. De Vriendt, J. Dierickx, & M. Wilmet (Eds.), Grammaire générative et psychomécanique du langage: actes du colloque organisé par le Centre d'études linguistiques et littéraires de la Vrije Universiteit Brussel, Bruxelles, 29-31 mai 1974 (pp. 84-87). Paris: Didier.
  • Seuren, P. A. M. (1983). Overwegingen bij de spelling van het Sranan en een spellingsvoorstel. OSO, 2(1), 67-81.
  • Seuren, P. A. M. (1985). Predicate raising and semantic transparency in Mauritian Creole. In N. Boretzky, W. Enninger, & T. Stolz (Eds.), Akten des 2. Essener Kolloquiums über "Kreolsprachen und Sprachkontakte", 29-30 Nov. 1985 (pp. 203-229). Bochum: Brockmeyer.
  • Shi, R., Werker, J., & Cutler, A. (2003). Function words in early speech perception. In Proceedings of the 15th International Congress of Phonetic Sciences (pp. 3009-3012).

    Abstract

    Three experiments examined whether infants recognise functors in phrases, and whether their representations of functors are phonetically well specified. Eight- and 13-month-old English infants heard monosyllabic lexical words preceded by real functors (e.g., the, his) versus nonsense functors (e.g., kuh); the latter were minimally modified segmentally (but not prosodically) from real functors. Lexical words were constant across conditions; thus recognition of functors would appear as longer listening time to sequences with real functors. Eight-month-olds' listening times to sequences with real versus nonsense functors did not significantly differ, suggesting that they did not recognise real functors, or functor representations lacked phonetic specification. However, 13-month-olds listened significantly longer to sequences with real functors. Thus, somewhere between 8 and 13 months of age infants learn familiar functors and represent them with segmental detail. We propose that accumulated frequency of functors in input in general passes a critical threshold during this time.
  • Sikora, K., & Roelofs, A. (2018). Switching between spoken language-production tasks: the role of attentional inhibition and enhancement. Language, Cognition and Neuroscience, 33(7), 912-922. doi:10.1080/23273798.2018.1433864.

    Abstract

    Since Pillsbury [1908. Attention. London: Swan Sonnenschein & Co], the issue of whether attention operates through inhibition or enhancement has been on the scientific agenda. We examined whether overcoming previous attentional inhibition or enhancement is the source of asymmetrical switch costs in spoken noun-phrase production and colour-word Stroop tasks. In Experiment 1, using bivalent stimuli, we found asymmetrical costs in response times for switching between long and short phrases and between Stroop colour naming and reading. However, in Experiment 2, using bivalent stimuli for the weaker tasks (long phrases, colour naming) and univalent stimuli for the stronger tasks (short phrases, word reading), we obtained an asymmetrical switch cost for phrase production, but a symmetrical cost for Stroop. The switch cost evidence was quantified using Bayesian statistical analyses. Our findings suggest that switching between phrase types involves inhibition, whereas switching between colour naming and reading involves enhancement. Thus, the attentional mechanism depends on the language-production task involved. The results challenge theories of task switching that assume only one attentional mechanism, inhibition or enhancement, rather than both mechanisms.
  • Silva, S., Folia, V., Inácio, F., Castro, S. L., & Petersson, K. M. (2018). Modality effects in implicit artificial grammar learning: An EEG study. Brain Research, 1687, 50-59. doi:10.1016/j.brainres.2018.02.020.

    Abstract

    Recently, it has been proposed that sequence learning engages a combination of modality-specific operating networks and modality-independent computational principles. In the present study, we compared the behavioural and EEG outcomes of implicit artificial grammar learning in the visual vs. auditory modality. We controlled for the influence of surface characteristics of sequences (Associative Chunk Strength), thus focusing on the strictly structural aspects of sequence learning, and we adapted the paradigms to compensate for known frailties of the visual modality compared to audition (temporal presentation, fast presentation rate). The behavioural outcomes were similar across modalities. Favouring the idea of modality-specificity, ERPs in response to grammar violations differed in topography and latency (earlier and more anterior component in the visual modality), and ERPs in response to surface features emerged only in the auditory modality. In favour of modality-independence, we observed three common functional properties in the late ERPs of the two grammars: both were free of interactions between structural and surface influences, both were more extended in a grammaticality classification test than in a preference classification test, and both correlated positively and strongly with theta event-related-synchronization during baseline testing. Our findings support the idea of modality-specificity combined with modality-independence, and suggest that memory for visual vs. auditory sequences may largely contribute to cross-modal differences.
  • Sjerps, M. J., Zhang, C., & Peng, G. (2018). Lexical tone is perceived relative to locally surrounding context, vowel quality to preceding context. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 914-924. doi:10.1037/xhp0000504.

    Abstract

    Important speech cues such as lexical tone and vowel quality are perceptually contrasted to the distribution of those same cues in surrounding contexts. However, it is unclear whether preceding and following contexts have similar influences, and to what extent those influences are modulated by the auditory history of previous trials. To investigate this, Cantonese participants labeled sounds from (a) a tone continuum (mid- to high-level), presented with a context that had raised or lowered F0 values and (b) a vowel quality continuum (/u/ to /o/), where the context had raised or lowered F1 values. Contexts with high or low F0/F1 were presented in separate blocks or intermixed in 1 block. Contexts were presented following (Experiment 1) or preceding the target continuum (Experiment 2). Contrastive effects were found for both tone and vowel quality (e.g., decreased F0 values in contexts lead to more high tone target judgments and vice versa). Importantly, however, lexical tone was only influenced by F0 in immediately preceding and following contexts. Vowel quality was only influenced by the F1 in preceding contexts, but this extended to contexts from preceding trials. Contextual influences on tone and vowel quality are qualitatively different, which has important implications for understanding the mechanism of context effects in speech perception.
  • Slone, L. K., Abney, D. H., Borjon, J. I., Chen, C.-h., Franchak, J. M., Pearcy, D., Suarez-Rivera, C., Xu, T. L., Zhang, Y., Smith, L. B., & Yu, C. (2018). Gaze in action: Head-mounted eye tracking of children's dynamic visual attention during naturalistic behavior. Journal of Visualized Experiments, (141): e58496. doi:10.3791/58496.

    Abstract

    Young children's visual environments are dynamic, changing moment-by-moment as children physically and visually explore spaces and objects and interact with people around them. Head-mounted eye tracking offers a unique opportunity to capture children's dynamic egocentric views and how they allocate visual attention within those views. This protocol provides guiding principles and practical recommendations for researchers using head-mounted eye trackers in both laboratory and more naturalistic settings. Head-mounted eye tracking complements other experimental methods by enhancing opportunities for data collection in more ecologically valid contexts through increased portability and freedom of head and body movements compared to screen-based eye tracking. This protocol can also be integrated with other technologies, such as motion tracking and heart-rate monitoring, to provide a higher-density multimodal dataset for examining natural behavior, learning, and development than was previously possible. This paper illustrates the types of data generated from head-mounted eye tracking in a study designed to investigate visual attention in one natural context for toddlers: free-flowing toy play with a parent. Successful use of this protocol will allow researchers to collect data that can be used to answer questions not only about visual attention, but also about a broad range of other perceptual, cognitive, and social skills and their development.
  • De Smedt, F., Merchie, E., Barendse, M. T., Rosseel, Y., De Naeghel, J., & Van Keer, H. (2018). Cognitive and motivational challenges in writing: Studying the relation with writing performance across students' gender and achievement level. Reading Research Quarterly, 53(2), 249-272. doi:10.1002/rrq.193.

    Abstract

    In the past, several assessment reports on writing repeatedly showed that elementary school students do not develop the essential writing skills to be successful in school. In this respect, prior research has pointed to the fact that cognitive and motivational challenges are at the root of the rather basic level of elementary students' writing performance. Additionally, previous research has revealed gender and achievement-level differences in elementary students' writing. In view of providing effective writing instruction for all students to overcome writing difficulties, the present study provides more in-depth insight into (a) how cognitive and motivational challenges mediate and correlate with students' writing performance and (b) whether and how these relations vary for boys and girls and for writers of different achievement levels. In the present study, 1,577 fifth- and sixth-grade students completed questionnaires regarding their writing self-efficacy, writing motivation, and writing strategies. In addition, half of the students completed two writing tests, respectively focusing on the informational or narrative text genre. Based on multiple group structural equation modeling (MG-SEM), we put forward two models: a MG-SEM model for boys and girls and a MG-SEM model for low, average, and high achievers. The results underline the importance of studying writing models for different groups of students in order to gain more refined insight into the complex interplay between motivational and cognitive challenges related to students' writing performance.
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488 520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Smulders, F. T. Y., Ten Oever, S., Donkers, F. C. L., Quaedflieg, C. W. E. M., & Van de Ven, V. (2018). Single-trial log transformation is optimal in frequency analysis of resting EEG alpha. European Journal of Neuroscience, 48(7), 2585-2598. doi:10.1111/ejn.13854.

    Abstract

    The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude.
  • Snijders Blok, L., Rousseau, J., Twist, J., Ehresmann, S., Takaku, M., Venselaar, H., Rodan, L. H., Nowak, C. B., Douglas, J., Swoboda, K. J., Steeves, M. A., Sahai, I., Stumpel, C. T. R. M., Stegmann, A. P. A., Wheeler, P., Willing, M., Fiala, E., Kochhar, A., Gibson, W. T., Cohen, A. S. A., Agbahovbe, R., Innes, A. M., Au, P. Y. B., Rankin, J., Anderson, I. J., Skinner, S. A., Louie, R. J., Warren, H. E., Afenjar, A., Keren, B., Nava, C., Buratti, J., Isapof, A., Rodriguez, D., Lewandowski, R., Propst, J., Van Essen, T., Choi, M., Lee, S., Chae, J. H., Price, S., Schnur, R. E., Douglas, G., Wentzensen, I. M., Zweier, C., Reis, A., Bialer, M. G., Moore, C., Koopmans, M., Brilstra, E. H., Monroe, G. R., Van Gassen, K. L. I., Van Binsbergen, E., Newbury-Ecob, R., Bownass, L., Bader, I., Mayr, J. A., Wortmann, S. B., Jakielski, K. J., Strand, E. A., Kloth, K., Bierhals, T., The DDD study, Roberts, J. D., Petrovich, R. M., Machida, S., Kurumizaka, H., Lelieveld, S., Pfundt, R., Jansen, S., Derizioti, P., Faivre, L., Thevenon, J., Assoum, M., Shriberg, L., Kleefstra, T., Brunner, H. G., Wade, P. A., Fisher, S. E., & Campeau, P. M. (2018). CHD3 helicase domain mutations cause a neurodevelopmental syndrome with macrocephaly and impaired speech and language. Nature Communications, 9: 4619. doi:10.1038/s41467-018-06014-6.

    Abstract

    Chromatin remodeling is of crucial importance during brain development. Pathogenic
    alterations of several chromatin remodeling ATPases have been implicated in neurodevelopmental
    disorders. We describe an index case with a de novo missense mutation in CHD3,
    identified during whole genome sequencing of a cohort of children with rare speech disorders.
    To gain a comprehensive view of features associated with disruption of this gene, we use a
    genotype-driven approach, collecting and characterizing 35 individuals with de novo CHD3
    mutations and overlapping phenotypes. Most mutations cluster within the ATPase/helicase
    domain of the encoded protein. Modeling their impact on the three-dimensional structure
    demonstrates disturbance of critical binding and interaction motifs. Experimental assays with
    six of the identified mutations show that a subset directly affects ATPase activity, and all but
    one yield alterations in chromatin remodeling. We implicate de novo CHD3 mutations in a
    syndrome characterized by intellectual disability, macrocephaly, and impaired speech and
    language.
  • Snijders Blok, L., Hiatt, S. M., Bowling, K. M., Prokop, J. W., Engel, K. L., Cochran, J. N., Bebin, E. M., Bijlsma, E. K., Ruivenkamp, C. A. L., Terhal, P., Simon, M. E. H., Smith, R., Hurst, J. A., The DDD study, McLaughlin, H., Person, R., Crunk, A., Wangler, M. F., Streff, H., Symonds, J. D., Zuberi, S. M., Elliott, K. S., Sanders, V. R., Masunga, A., Hopkin, R. J., Dubbs, H. A., Ortiz-Gonzalez, X. R., Pfundt, R., Brunner, H. G., Fisher, S. E., Kleefstra, T., & Cooper, G. M. (2018). De novo mutations in MED13, a component of the Mediator complex, are associated with a novel neurodevelopmental disorder. Human Genetics, 137(5), 375-388. doi:10.1007/s00439-018-1887-y.

    Abstract

    Many genetic causes of developmental delay and/or intellectual disability (DD/ID) are extremely rare, and robust discovery of these requires both large-scale DNA sequencing and data sharing. Here we describe a GeneMatcher collaboration which led to a cohort of 13 affected individuals harboring protein-altering variants, 11 of which are de novo, in MED13; the only inherited variant was transmitted to an affected child from an affected mother. All patients had intellectual disability and/or developmental delays, including speech delays or disorders. Other features that were reported in two or more patients include autism spectrum disorder, attention deficit hyperactivity disorder, optic nerve abnormalities, Duane anomaly, hypotonia, mild congenital heart abnormalities, and dysmorphisms. Six affected individuals had mutations that are predicted to truncate the MED13 protein, six had missense mutations, and one had an in-frame-deletion of one amino acid. Out of the seven non-truncating mutations, six clustered in two specific locations of the MED13 protein: an N-terminal and C-terminal region. The four N-terminal clustering mutations affect two adjacent amino acids that are known to be involved in MED13 ubiquitination and degradation, p.Thr326 and p.Pro327. MED13 is a component of the CDK8-kinase module that can reversibly bind Mediator, a multi-protein complex that is required for Polymerase II transcription initiation. Mutations in several other genes encoding subunits of Mediator have been previously shown to associate with DD/ID, including MED13L, a paralog of MED13. Thus, our findings add MED13 to the group of CDK8-kinase module-associated disease genes.
  • Speed, L. J., & Majid, A. (2018). An exception to mental simulation: No evidence for embodied odor language. Cognitive Science, 42(4), 1146-1178. doi:10.1111/cogs.12593.

    Abstract

    Do we mentally simulate olfactory information? We investigated mental simulation of odors and sounds in two experiments. Participants retained a word while they smelled an odor or heard a sound, then rated odor/sound intensity and recalled the word. Later odor/sound recognition was also tested, and pleasantness and familiarity judgments were collected. Word recall was slower when the sound and sound-word mismatched (e.g., bee sound with the word typhoon). Sound recognition was higher when sounds were paired with a match or near-match word (e.g., bee sound with bee or buzzer). This indicates sound-words are mentally simulated. However, using the same paradigm no memory effects were observed for odor. Instead it appears odor-words only affect lexical-semantic representations, demonstrated by higher ratings of odor intensity and pleasantness when an odor was paired with a match or near-match word (e.g., peach odor with peach or mango). These results suggest fundamental differences in how odor and sound-words are represented.

