Publications

  • Montero-Melis, G., & Jaeger, T. F. (2020). Changing expectations mediate adaptation in L2 production. Bilingualism: Language and Cognition, 23(3), 602-617. doi:10.1017/S1366728919000506.

    Abstract

    Native language (L1) processing draws on implicit expectations. An open question is whether non-native learners of a second language (L2) similarly draw on expectations, and whether these expectations are based on learners’ L1 or L2 knowledge. We approach this question by studying inverse preference effects on lexical encoding. L1 and L2 speakers of Spanish described motion events, while they were either primed to express path, manner, or neither. In line with other work, we find that L1 speakers adapted more strongly after primes that are unexpected in their L1. For L2 speakers, adaptation depended on their L2 proficiency: The least proficient speakers exhibited the inverse preference effect on adaptation based on what was unexpected in their L1; but the more proficient speakers were, the more they exhibited inverse preference effects based on what was unexpected in the L2. We discuss implications for L1 transfer and L2 acquisition.
  • Montero-Melis, G., Isaksson, P., Van Paridon, J., & Ostarek, M. (2020). Does using a foreign language reduce mental imagery? Cognition, 196: 104134. doi:10.1016/j.cognition.2019.104134.

    Abstract

    In a recent article, Hayakawa and Keysar (2018) propose that mental imagery is less vivid when evoked in a foreign than in a native language. The authors argue that reduced mental imagery could even account for moral foreign language effects, whereby moral choices become more utilitarian when made in a foreign language. Here we demonstrate that Hayakawa and Keysar's (2018) key results are better explained by reduced language comprehension in a foreign language than by less vivid imagery. We argue that the paradigm used in Hayakawa and Keysar (2018) does not provide a satisfactory test of reduced imagery and we discuss an alternative paradigm based on recent experimental developments.

    Additional information

    Supplementary data and scripts
  • Mudd, K., Lutzenberger, H., De Vos, C., Fikkert, P., Crasborn, O., & De Boer, B. (2020). The effect of sociolinguistic factors on variation in the Kata Kolok lexicon. Asia-Pacific Language Variation, 6(1), 53-88. doi:10.1075/aplv.19009.mud.

    Abstract

    Sign languages can be categorized as shared sign languages or deaf community sign languages, depending on the context in which they emerge. It has been suggested that shared sign languages exhibit more variation in the expression of everyday concepts than deaf community sign languages (Meir, Israel, Sandler, Padden, & Aronoff, 2012). For deaf community sign languages, it has been shown that various sociolinguistic factors condition this variation. This study presents one of the first in-depth investigations of how sociolinguistic factors (deaf status, age, clan, gender and having a deaf family member) affect lexical variation in a shared sign language, using a picture description task in Kata Kolok. To study lexical variation in Kata Kolok, two methodologies are devised: the identification of signs by underlying iconic motivation and mapping, and a way to compare individual repertoires of signs by calculating the lexical distances between participants. Alongside presenting novel methodologies to study this type of sign language, we present preliminary evidence of sociolinguistic factors that may influence variation in the Kata Kolok lexicon.
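The abstract above mentions comparing individual repertoires of signs by calculating lexical distances between participants. As a rough illustration of the idea only (not the authors' actual procedure; the sign-variant labels below are invented), a pairwise lexical distance can be computed as the Jaccard distance between the sets of variants each signer uses:

```python
# Hypothetical sketch: lexical distance between two participants as the
# Jaccard distance between their sign-variant repertoires. The labels are
# invented for illustration and are not Kata Kolok data.

def lexical_distance(repertoire_a, repertoire_b):
    """Jaccard distance: 1 - |A intersect B| / |A union B|."""
    a, b = set(repertoire_a), set(repertoire_b)
    if not a | b:
        return 0.0
    return 1 - len(a & b) / len(a | b)

# Invented sign-variant labels for three participants
p1 = {"PIG-snout", "TREE-outline", "DOG-pat"}
p2 = {"PIG-snout", "TREE-outline", "DOG-call"}
p3 = {"PIG-ear", "TREE-trunk", "DOG-call"}

print(lexical_distance(p1, p2))  # 0.5 (2 shared variants out of 4 total)
print(lexical_distance(p1, p3))  # 1.0 (no shared variants)
```

A matrix of such pairwise distances could then be examined for effects of sociolinguistic factors such as deaf status, age, clan, or gender.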
  • Muhinyi, A., Hesketh, A., Stewart, A. J., & Rowland, C. F. (2020). Story choice matters for caregiver extra-textual talk during shared reading with preschoolers. Journal of Child Language, 47(3), 633-654. doi:10.1017/S0305000919000783.

    Abstract

    This study aimed to examine the influence of the complexity of the story-book on caregiver extra-textual talk (i.e., interactions beyond text reading) during shared reading with preschool-age children. Fifty-three mother–child dyads (3;00–4;11) were video-recorded sharing two ostensibly similar picture-books: a simple story (containing no false belief) and a complex story (containing a false belief central to the plot, which provided content that was more challenging for preschoolers to understand). Book-reading interactions were transcribed and coded. Results showed that the complex stories facilitated more extra-textual talk from mothers, and a higher quality of extra-textual talk (as indexed by linguistic richness and level of abstraction). Although the type of story did not affect the number of questions mothers posed, more elaborative follow-ups on children's responses were provided by mothers when sharing complex stories. Complex stories may facilitate more and linguistically richer caregiver extra-textual talk, having implications for preschoolers’ developing language abilities.
  • Nakamoto, T., Hatsuta, S., Yagi, S., Verdonschot, R. G., Taguchi, A., & Kakimoto, N. (2020). Computer-aided diagnosis system for osteoporosis based on quantitative evaluation of mandibular lower border porosity using panoramic radiographs. Dentomaxillofacial Radiology, 49(4): 20190481. doi:10.1259/dmfr.20190481.

    Abstract

    Objectives: A new computer-aided screening system for osteoporosis using panoramic radiographs was developed. The conventional system could detect porotic changes within the lower border of the mandible, but its severity could not be evaluated. Our aim was to enable the system to measure severity by implementing a linear bone resorption severity index (BRSI) based on the cortical bone shape.
    Methods: The participants were 68 females (>50 years) who underwent panoramic radiography and lumbar spine bone density measurements. The new system was designed to extract the lower border of the mandible as regions of interest and convert them into morphological skeleton line images. The total perimeter length of the skeleton lines was defined as the BRSI. Forty images were visually evaluated for the presence of cortical bone porosity. The correlation between visual evaluation and the participants' BRSI, and the optimal BRSI threshold for the new system, were investigated through a receiver operating characteristic analysis. The diagnostic performance of the new system was evaluated by comparing results from the new system with lumbar bone density tests in 28 participants.
    Results: BRSI and lumbar bone density showed a strong negative correlation (p < 0.01). BRSI showed a strong correlation with visual evaluation. The new system showed high diagnostic efficacy with sensitivity of 90.9%, specificity of 64.7%, and accuracy of 75.0%.
    Conclusions: The new screening system is able to quantitatively evaluate mandibular cortical porosity. This allows for preventive screening for osteoporosis thereby enhancing clinical prospects.
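The Methods above describe choosing an optimal BRSI threshold through receiver operating characteristic analysis. A minimal sketch of that step, assuming invented BRSI values and a Youden-index criterion (the abstract does not state which ROC criterion was used):

```python
# Hedged illustration of ROC-based cutoff selection: given BRSI values for
# images judged porotic vs. non-porotic by visual evaluation, pick the cutoff
# that maximizes Youden's J (sensitivity + specificity - 1). All numbers are
# invented; higher BRSI is assumed to indicate more cortical porosity.

def best_threshold(porotic, non_porotic):
    """Scan candidate cutoffs; classify BRSI >= t as porotic."""
    best_t, best_j = None, -1.0
    for t in sorted(porotic + non_porotic):
        sens = sum(v >= t for v in porotic) / len(porotic)
        spec = sum(v < t for v in non_porotic) / len(non_porotic)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

porotic = [310, 340, 355, 390]      # invented BRSI values
non_porotic = [220, 250, 270, 300]
t, j = best_threshold(porotic, non_porotic)
print(t, j)  # 310 1.0 -- the two invented groups are perfectly separable
```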
  • Nieuwland, M. S., Arkhipova, Y., & Rodríguez-Gómez, P. (2020). Anticipating words during spoken discourse comprehension: A large-scale, pre-registered replication study using brain potentials. Cortex, 133, 1-36. doi:10.1016/j.cortex.2020.09.007.

    Abstract

    Numerous studies report brain potential evidence for the anticipation of specific words during language comprehension. In the most convincing demonstrations, highly predictable nouns exert an influence on processing even before they appear to a reader or listener, as indicated by the brain's neural response to a prenominal adjective or article when it mismatches the expectations about the upcoming noun. However, recent studies suggest that some well-known demonstrations of prediction may be hard to replicate. This could signal the use of data-contingent analysis, but might also mean that readers and listeners do not always use prediction-relevant information in the way that psycholinguistic theories typically suggest. To shed light on this issue, we performed a close replication of one of the best-cited ERP studies on word anticipation (Van Berkum, Brown, Zwitserlood, Kooijman & Hagoort, 2005; Experiment 1), in which participants listened to Dutch spoken mini-stories. In the original study, the marking of grammatical gender on pre-nominal adjectives (‘groot/grote’) elicited an early positivity when mismatching the gender of an unseen, highly predictable noun, compared to matching gender. The current pre-registered study involved that same manipulation, but used a novel set of materials twice the size of the original set, an increased sample size (N = 187), and Bayesian mixed-effects model analyses that better accounted for known sources of variance than the original. In our study, mismatching gender elicited more negative voltage than matching gender at posterior electrodes. However, this N400-like effect was small in size and lacked support from Bayes Factors. In contrast, we successfully replicated the original's noun effects. While our results yielded some support for prediction, they do not support the Van Berkum et al. effect and highlight the risks associated with commonly employed data-contingent analyses and small sample sizes. Our results also raise the question whether Dutch listeners reliably or consistently use adjectival inflection information to inform their noun predictions.
  • Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.

    Abstract

    Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning.
  • Nieuwland, M. S., & Kazanina, N. (2020). The neural basis of linguistic prediction: Introduction to the special issue. Neuropsychologia, 146: 107532. doi:10.1016/j.neuropsychologia.2020.107532.
  • Noble, C., Cameron-Faulkner, T., Jessop, A., Coates, A., Sawyer, H., Taylor-Ims, R., & Rowland, C. F. (2020). The impact of interactive shared book reading on children's language skills: A randomized controlled trial. Journal of Speech, Language, and Hearing Research, 63(6), 1878-1897. doi:10.1044/2020_JSLHR-19-00288.

    Abstract

    Purpose: Research has indicated that interactive shared book reading can support a wide range of early language skills and that children who are read to regularly in the early years learn language faster, enter school with a larger vocabulary, and become more successful readers at school. Despite the large volume of research suggesting interactive shared reading is beneficial for language development, two fundamental issues remain outstanding: whether shared book reading interventions are equally effective (a) for children from all socioeconomic backgrounds and (b) for a range of language skills.
    Method: To address these issues, we conducted a randomized controlled trial to investigate the effects of two 6-week interactive shared reading interventions on a range of language skills in children across the socioeconomic spectrum. One hundred and fifty children aged between 2;6 and 3;0 (years;months) were randomly assigned to one of three conditions: a pause reading, a dialogic reading, or an active shared reading control condition.
    Results: The findings indicated that the interventions were effective at changing caregiver reading behaviors. However, the interventions did not boost children’s language skills over and above the effect of an active reading control condition. There were also no effects of socioeconomic status.
    Conclusion: This randomized controlled trial showed that caregivers from all socioeconomic backgrounds successfully adopted an interactive shared reading style. However, while the interventions were effective at increasing caregivers’ use of interactive shared book reading behaviors, this did not have a significant impact on the children’s language skills. The findings are discussed in terms of practical implications and future research.

    Additional information

    Supplemental Material
  • Ohlerth, A.-K., Valentin, A., Vergani, F., Ashkan, K., & Bastiaanse, R. (2020). The verb and noun test for peri-operative testing (VAN-POP): Standardized language tests for navigated transcranial magnetic stimulation and direct electrical stimulation. Acta Neurochirurgica, (2), 397-406. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background

    Protocols for intraoperative language mapping with direct electrical stimulation (DES) often include various language tasks triggering both nouns and verbs in sentences. Such protocols are not readily available for navigated transcranial magnetic stimulation (nTMS), where only single word object naming is generally used. Here, we present the development, norming, and standardization of the verb and noun test for peri-operative testing (VAN-POP) that measures language skills more extensively.
    Methods

    The VAN-POP tests noun and verb retrieval in sentence context. Items are marked and balanced for several linguistic factors known to influence word retrieval. The VAN-POP was administered in English, German, and Dutch under conditions that are used for nTMS and DES paradigms. For each language, 30 speakers were tested.
    Results

    At least 50 items per task per language were named fluently and reached a high naming agreement.
    Conclusion

    The protocol proved to be suitable for pre- and intraoperative language mapping with nTMS and DES.
  • Ortega, G., Ozyurek, A., & Peeters, D. (2020). Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 403-415. doi:10.1037/xlm0000729.

    Abstract

    When learning a second spoken language, cognates, words overlapping in form and meaning with one’s native language, help break into the language one wishes to acquire. But what happens when the to-be-acquired second language is a sign language? We tested whether hearing nonsigners rely on their gestural repertoire at first exposure to a sign language. Participants saw iconic signs with high and low overlap with the form of iconic gestures while electrophysiological brain activity was recorded. Upon first exposure, signs with low overlap with gestures elicited enhanced positive amplitude in the P3a component compared to signs with high overlap. This effect disappeared after a training session. We conclude that nonsigners generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures, even without having to produce them. Learners thus draw from any available semiotic resources when acquiring a second language, and not only from their linguistic experience.
  • Ortega, G., & Ozyurek, A. (2020). Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behavior Research Methods, 52, 51-67. doi:10.3758/s13428-019-01204-6.

    Abstract

    An unprecedented number of empirical studies have shown that iconic gestures—those that mimic the sensorimotor attributes of a referent—contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture–meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture’s mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
  • Ortega, G., & Ozyurek, A. (2020). Types of iconicity and combinatorial strategies distinguish semantic categories in silent gesture. Language and Cognition, 12(1), 84-113. doi:10.1017/langcog.2019.28.

    Abstract

    In this study we explore whether different types of iconic gestures (i.e., acting, drawing, representing) and their combinations are used systematically to distinguish between different semantic categories in production and comprehension. In Study 1, we elicited silent gestures from Mexican and Dutch participants to represent concepts from three semantic categories: actions, manipulable objects, and non-manipulable objects. Both groups favoured the acting strategy to represent actions and manipulable objects, while non-manipulable objects were represented through the drawing strategy. Actions elicited primarily single gestures whereas objects elicited combinations of different types of iconic gestures as well as pointing. In Study 2, a different group of participants were shown gestures from Study 1 and were asked to guess their meaning. Single-gesture depictions for actions were more accurately guessed than for objects. Objects represented through two-gesture combinations (e.g., acting + drawing) were more accurately guessed than objects represented with a single gesture. We suggest iconicity is exploited to make direct links with a referent, but when it lends itself to ambiguity, individuals resort to combinatorial structures to clarify the intended referent. Iconicity and the need to communicate a clear signal shape the structure of silent gestures and this in turn supports comprehension.
  • Peeters, D. (2020). Bilingual switching between languages and listeners: Insights from immersive virtual reality. Cognition, 195: 104107. doi:10.1016/j.cognition.2019.104107.

    Abstract

    Perhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.

    Additional information

    Supplementary data
  • Persson, J., Szalisznyó, K., Antoni, G., Wall, A., Fällmar, D., Zora, H., & Bodén, R. (2020). Phosphodiesterase 10A levels are related to striatal function in schizophrenia: a combined positron emission tomography and functional magnetic resonance imaging study. European Archives of Psychiatry and Clinical Neuroscience, 270(4), 451-459. doi:10.1007/s00406-019-01021-0.

    Abstract

    Pharmacological inhibition of phosphodiesterase 10A (PDE10A) is being investigated as a treatment option in schizophrenia. PDE10A acts postsynaptically on striatal dopamine signaling by regulating neuronal excitability through its inhibition of cyclic adenosine monophosphate (cAMP), and we recently found it to be reduced in schizophrenia compared to controls. Here, this finding of reduced PDE10A in schizophrenia was followed up in the same sample to investigate the effect of reduced striatal PDE10A on the neural and behavioral function of striatal and downstream basal ganglia regions. A positron emission tomography (PET) scan with the PDE10A ligand [11C]Lu AE92686 was performed, followed by a 6 min resting-state magnetic resonance imaging (MRI) scan in ten patients with schizophrenia. To assess the relationship between striatal function and neurophysiological and behavioral functioning, salience processing was assessed using a mismatch negativity paradigm, an auditory event-related electroencephalographic measure, episodic memory was assessed using the Rey auditory verbal learning test (RAVLT) and executive functioning using trail-making test B. Reduced striatal PDE10A was associated with increased amplitude of low-frequency fluctuations (ALFF) within the putamen and substantia nigra, respectively. Higher ALFF in the substantia nigra, in turn, was associated with lower episodic memory performance. The findings are in line with a role for PDE10A in striatal functioning, and suggest that reduced striatal PDE10A may contribute to cognitive symptoms in schizophrenia.
  • Postema, M., Carrion Castillo, A., Fisher, S. E., Vingerhoets, G., & Francks, C. (2020). The genetics of situs inversus without primary ciliary dyskinesia. Scientific Reports, 10: 3677. doi:10.1038/s41598-020-60589-z.

    Abstract

    Situs inversus (SI), a left-right mirror reversal of the visceral organs, can occur with recessive Primary Ciliary Dyskinesia (PCD). However, most people with SI do not have PCD, and the etiology of their condition remains poorly studied. We sequenced the genomes of 15 people with SI, of which six had PCD, as well as 15 controls. Subjects with non-PCD SI in this sample had an elevated rate of left-handedness (five out of nine), which suggested possible developmental mechanisms linking brain and body laterality. The six SI subjects with PCD all had likely recessive mutations in genes already known to cause PCD. Two non-PCD SI cases also had recessive mutations in known PCD genes, suggesting reduced penetrance for PCD in some SI cases. One non-PCD SI case had recessive mutations in PKD1L1, and another in CFAP52 (also known as WDR16). Both of these genes have previously been linked to SI without PCD. However, five of the nine non-PCD SI cases, including three of the left-handers in this dataset, had no obvious monogenic basis for their condition. Environmental influences, or possible random effects in early development, must be considered.

    Additional information

    Supplementary information
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Reply to Ravignani and Kotz: Physical impulses from upper-limb movements impact the respiratory–vocal system. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23225-23226. doi:10.1073/pnas.2015452117.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Acoustic information about upper limb movement in voicing. Proceedings of the National Academy of Sciences of the United States of America, 117(21), 11364-11367. doi:10.1073/pnas.2004163117.

    Abstract

    We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear but not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.
  • Pouw, W., Wassenburg, S. I., Hostetter, A. B., De Koning, B. B., & Paas, F. (2020). Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion. Psychological Research, 84(4), 966-980. doi:10.1007/s00426-018-1128-y.

    Abstract

    Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory and we found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants’ heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent’s conscious renderings about the actions they describe, rather than implicit motor routines.
  • Pouw, W., Harrison, S. J., Esteve-Gibert, N., & Dixon, J. A. (2020). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. The Journal of the Acoustical Society of America, 148(3): 1231. doi:10.1121/10.0001730.

    Abstract

    Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement is present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route of why communicative upper limb movements co-occur with speech in humans.

    Additional information

    Link to Preprint on OSF
  • Pouw, W., & Dixon, J. A. (2020). Gesture networks: Introducing dynamic time warping and network analysis for the kinematic study of gesture ensembles. Discourse Processes, 57(4), 301-319. doi:10.1080/0163853X.2019.1678967.

    Abstract

    We introduce applications of established methods in time-series and network analysis that we jointly apply here for the kinematic study of gesture ensembles. We define a gesture ensemble as the set of gestures produced during discourse by a single person or a group of persons. Here we are interested in how gestures kinematically relate to one another. We use a bivariate time-series analysis called dynamic time warping to assess how similar each gesture is to other gestures in the ensemble in terms of their velocity profiles (as well as studying multivariate cases with gesture velocity and speech amplitude envelope profiles). By relating each gesture event to all other gesture events produced in the ensemble, we obtain a weighted matrix that essentially represents a network of similarity relationships. We can therefore apply network analysis that can gauge, for example, how diverse or coherent certain gestures are with respect to the gesture ensemble. We believe these analyses promise to be of great value for gesture studies, as we can come to understand how low-level gesture features (kinematics of gesture) relate to the higher-order organizational structures present at the level of discourse.

    Additional information

    Open Data OSF
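    The pipeline described in the abstract above (pairwise dynamic time warping over velocity profiles, yielding a weighted similarity matrix that is then treated as a network) can be sketched in a few lines. This is an illustrative toy implementation, not the authors' code; the function names and the distance-to-similarity mapping are my own assumptions.

```python
# Sketch: DTW over gesture velocity profiles -> weighted "gesture network".

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D profiles."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def gesture_network(profiles):
    """Pairwise DTW distances mapped to a symmetric similarity matrix."""
    k = len(profiles)
    sim = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            d = dtw_distance(profiles[i], profiles[j])
            s = 1.0 / (1.0 + d)  # one possible distance-to-similarity map
            sim[i][j] = sim[j][i] = s
    return sim

def node_strength(sim, i):
    """Summed similarity of gesture i to the ensemble (a coherence gauge)."""
    return sum(sim[i])
```

    In practice one would feed in velocity profiles extracted from motion tracking and apply richer network measures (e.g., community detection) to the resulting matrix; the node-strength function above merely illustrates the "how coherent is this gesture with the ensemble" idea.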
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2020). Gesture–speech physics: The biomechanical basis for the emergence of gesture–speech synchrony. Journal of Experimental Psychology: General, 149(2), 391-404. doi:10.1037/xge0000646.

    Abstract

    The phenomenon of gesture–speech synchrony involves tight coupling of prosodic contrasts in gesture movement (e.g., peak velocity) and speech (e.g., peaks in fundamental frequency; F0). Gesture–speech synchrony has been understood as completely governed by sophisticated neural-cognitive mechanisms. However, gesture–speech synchrony may have its original basis in the resonating forces that travel through the body. In the current preregistered study, movements with high physical impact affected phonation in line with gesture–speech synchrony as observed in natural contexts. Rhythmic beating of the arms entrained phonation acoustics (F0 and the amplitude envelope). Such effects were absent for a condition with low-impetus movements (wrist movements) and a condition without movement. Further, movement–phonation synchrony was more pronounced when participants were standing as opposed to sitting, indicating a mediating role for postural stability. We conclude that gesture–speech synchrony has a biomechanical basis, which will have implications for our cognitive, ontogenetic, and phylogenetic understanding of multimodal language.
  • Pouw, W., Trujillo, J. P., & Dixon, J. A. (2020). The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking. Behavior Research Methods, 52, 723-740. doi:10.3758/s13428-019-01271-9.

    Abstract

    There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.
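    The kinematic peaks mentioned in the abstract above (e.g., the moment of maximum speed, traditionally judged by eye) are straightforward to compute from motion-tracking samples. The following is a minimal sketch under my own assumptions (2-D position samples at a fixed frame rate), not code from the tutorial itself.

```python
# Sketch: locate a gesture's kinematic peak (maximum speed) from
# motion-tracking position samples, rather than estimating it by eye.

def speed_profile(xs, ys, fps):
    """Frame-to-frame speed (units/s) from 2-D position samples."""
    dt = 1.0 / fps
    return [((xs[i + 1] - xs[i]) ** 2 + (ys[i + 1] - ys[i]) ** 2) ** 0.5 / dt
            for i in range(len(xs) - 1)]

def kinematic_peak(xs, ys, fps):
    """Return (time_in_seconds, peak_speed) of the maximum-speed frame."""
    speeds = speed_profile(xs, ys, fps)
    i = max(range(len(speeds)), key=speeds.__getitem__)
    return i / fps, speeds[i]
```

    Real pipelines would first smooth the position traces (tracking jitter otherwise inflates speed estimates), which is one of the practical challenges the tutorial addresses.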
  • Preisig, B., Sjerps, M. J., Hervais-Adelman, A., Kösem, A., Hagoort, P., & Riecke, L. (2020). Bilateral gamma/delta transcranial alternating current stimulation affects interhemispheric speech sound integration. Journal of Cognitive Neuroscience, 32(7), 1242-1250. doi:10.1162/jocn_a_01498.

    Abstract

    Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.
  • Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2020). Alignment in multimodal interaction: An integrative framework. Cognitive Science, 44(11): e12911. doi:10.1111/cogs.12911.

    Abstract

    When people are engaged in social interaction, they can repeat aspects of each other’s communicative behavior, such as words or gestures. This kind of behavioral alignment has been studied across a wide range of disciplines and has been accounted for by diverging theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), while other dimensions remain understudied. This hampers theory testing and building, which requires a well‐defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions to formalize the relationship between any pair of behavior: time, sequence, meaning, form, and modality. We show how assumptions regarding the underlying mechanism of alignment (placed along the continuum of priming vs. grounding) pattern together with operationalizations in terms of the five dimensions. This integrative framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment, and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses regarding alignment.
  • Rasenberg, M., Rommers, J., & Van Bergen, G. (2020). Anticipating predictability: An ERP investigation of expectation-managing discourse markers in dialogue comprehension. Language, Cognition and Neuroscience, 35(1), 1-16. doi:10.1080/23273798.2019.1624789.

    Abstract

    In two ERP experiments, we investigated how the Dutch discourse markers eigenlijk “actually”, signalling expectation disconfirmation, and inderdaad “indeed”, signalling expectation confirmation, affect incremental dialogue comprehension. We investigated their effects on the processing of subsequent (un)predictable words, and on the quality of word representations in memory. Participants read dialogues with (un)predictable endings that followed a discourse marker (eigenlijk in Experiment 1, inderdaad in Experiment 2) or a control adverb. We found no strong evidence that discourse markers modulated online predictability effects elicited by subsequently read words. However, words following eigenlijk elicited an enhanced posterior post-N400 positivity compared with words following an adverb regardless of their predictability, potentially reflecting increased processing costs associated with pragmatically driven discourse updating. No effects of inderdaad were found on online processing, but inderdaad seemed to influence memory for (un)predictable dialogue endings. These findings nuance our understanding of how pragmatic markers affect incremental language comprehension.

    Additional information

    plcp_a_1624789_sm6686.docx
  • Ravignani, A., & Kotz, S. (2020). Breathing, voice and synchronized movement. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23223-23224. doi:10.1073/pnas.2011402117.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2020). The role of social network structure in the emergence of linguistic structure. Cognitive Science, 44(8): e12876. doi:10.1111/cogs.12876.

    Abstract

    Social network structure has been argued to shape the structure of languages, as well as affect the spread of innovations and the formation of conventions in the community. Specifically, theoretical and computational models of language change predict that sparsely connected communities develop more systematic languages, while tightly knit communities can maintain high levels of linguistic complexity and variability. However, the role of social network structure in the cultural evolution of languages has never been tested experimentally. Here, we present results from a behavioral group communication study, in which we examined the formation of new languages created in the lab by micro‐societies that varied in their network structure. We contrasted three types of social networks: fully connected, small‐world, and scale‐free. We examined the artificial languages created by these different networks with respect to their linguistic structure, communicative success, stability, and convergence. Results did not reveal any effect of network structure for any measure, with all languages becoming similarly more systematic, more accurate, more stable, and more shared over time. At the same time, small‐world networks showed the greatest variation in their convergence, stabilization, and emerging structure patterns, indicating that network structure can influence the community's susceptibility to random linguistic changes (i.e., drift).
  • Ripperda, J., Drijvers, L., & Holler, J. (2020). Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data. Behavior Research Methods, 52(4), 1783-1794. doi:10.3758/s13428-020-01350-2.

    Abstract

    In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automatize the detection and annotation of hand movements in video data. We provide a detailed description of how SPUDNIG detects hand movement initiation and termination, as well as open-source code and a short tutorial on an easy-to-use graphical user interface (GUI) of our tool. We then provide a proof-of-principle and validation of our method by comparing SPUDNIG’s output to manual annotations of gestures by a human coder. While the tool does not entirely eliminate the need for a human coder (e.g., for detecting false positives), our results demonstrate that SPUDNIG can detect both iconic and non-iconic gestures with very high accuracy, and could successfully detect all iconic gestures in our validation dataset. Importantly, SPUDNIG’s output can directly be imported into commonly used annotation tools such as ELAN and ANVIL. We therefore believe that SPUDNIG will be highly relevant for researchers studying multimodal communication, as its annotations significantly accelerate the analysis of large video corpora.

    Additional information

    data and materials
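    The core idea behind the kind of movement detection described in the SPUDNIG abstract above (flag frames where pixel change exceeds a threshold, then read off movement initiation and termination) can be sketched as follows. This is a toy frame-differencing illustration under my own assumptions, not SPUDNIG's actual implementation.

```python
# Sketch: pixel-change movement detection. Frames whose summed absolute
# pixel difference exceeds a threshold count as movement; contiguous runs
# become candidate segments with an onset (initiation) and offset (termination).

def frame_change(prev, curr):
    """Summed absolute grey-value difference between two frames (flat lists)."""
    return sum(abs(p - c) for p, c in zip(prev, curr))

def movement_segments(frames, threshold):
    """Return (onset, offset) frame-index pairs of detected movement runs."""
    moving = [frame_change(frames[i], frames[i + 1]) > threshold
              for i in range(len(frames) - 1)]
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                    # movement initiation
        elif not m and start is not None:
            segments.append((start, i))  # movement termination
            start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments
```

    A real tool restricts the difference computation to tracked hand regions and converts frame indices to timestamps for import into annotation software such as ELAN.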
  • Rodd, J., Bosker, H. R., Ernestus, M., Alday, P. M., Meyer, A. S., & Ten Bosch, L. (2020). Control of speaking rate is achieved by switching between qualitatively distinct cognitive ‘gaits’: Evidence from simulation. Psychological Review, 127(2), 281-304. doi:10.1037/rev0000172.

    Abstract

    That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: When walking, speed can be continuously increased, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech “gaits” that resemble walking and running? Or is control achieved by continuous modulation of a single gait? This study investigates these possibilities through simulations of a new connectionist computational model of the cognitive process of speech production, EPONA, that borrows from Dell, Burger, and Svec’s (1997) model. The model has parameters that can be adjusted to fit the temporal characteristics of speech at different speaking rates. We trained the model on a corpus of disyllabic Dutch words produced at different speaking rates. During training, different clusters of parameter values (regimes) were identified for different speaking rates. In a 1-gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In a multiple gait system, there is no linear relationship between the parameter settings associated with each gait, resulting in an abrupt shift in parameter values to move from speaking slowly to speaking fast. After training, the model achieved good fits in all three speaking rates. The parameter settings associated with each speaking rate were not linearly related, suggesting the presence of cognitive gaits. Thus, we provide the first computationally explicit account of the ability to modulate the speech production system to achieve different speaking styles.

    Additional information

    Supplemental material
  • Rojas-Berscia, L. M., Napurí, A., & Wang, L. (2020). Shawi (Chayahuita). Journal of the International Phonetic Association, 50(3), 417-430. doi:10.1017/S0025100318000415.

    Abstract

    Shawi is the language of the indigenous Shawi/Chayahuita people in Northwestern Amazonia, Peru. It belongs to the Kawapanan language family, together with its moribund sister language, Shiwilu. It is spoken by about 21,000 speakers (see Rojas-Berscia 2013) in the provinces of Alto Amazonas and Datem del Marañón in the region of Loreto and in the northern part of the region of San Martín, being one of the most vital languages in the country (see Figure 1). Although Shawi groups in the Upper Amazon were contacted by Jesuit missionaries during colonial times, the maintenance of their customs and language is striking. To date, most Shawi children are monolingual and have their first contact with Spanish at school. Yet, due to globalisation and the construction of highways by the Peruvian government, many Shawi villages are progressively westernising. This may result in the imminent loss of their indigenous culture and language.

    Additional information

    Supplementary material
  • Rossi, G. (2020). Other-repetition in conversation across languages: Bringing prosody into pragmatic typology. Language in Society, 49(4), 495-520. doi:10.1017/S0047404520000251.

    Abstract

    In this article, I introduce the aims and scope of a project examining other-repetition in natural conversation. This introduction provides the conceptual and methodological background for the five language-specific studies contained in this special issue, focussing on other-repetition in English, Finnish, French, Italian, and Swedish. Other-repetition is a recurrent conversational phenomenon in which a speaker repeats all or part of what another speaker has just said, typically in the next turn. Our project focusses particularly on other-repetitions that problematise what is being repeated and typically solicit a response. Previous research has shown that such repetitions can accomplish a range of conversational actions. But how do speakers of different languages distinguish these actions? In addressing this question, we put at centre stage the resources of prosody—the nonlexical acoustic-auditory features of speech—and bring its systematic analysis into the growing field of pragmatic typology—the comparative study of language use and conversational structure.
  • Rossi, G. (2020). The prosody of other-repetition in Italian: A system of tunes. Language in Society, 49(4), 619-652. doi:10.1017/S0047404520000627.

    Abstract

    As part of the project reported on in this special issue, the present study provides an overview of the types of action accomplished by other-repetition in Italian, with particular reference to the variety of the language spoken in the northeastern province of Trento. The analysis surveys actions within the domain of initiating repair, actions that extend beyond initiating repair, and actions that are alternative to initiating repair. Pitch contour emerges as a central design feature of other-repetition in Italian, with six nuclear contours associated with distinct types of action, sequential trajectories, and response patterns. The study also documents the interplay of pitch contour with other prosodic features (pitch span and register) and visible behavior (head nods, eyebrow movements).

    Additional information

    Sound clips.zip
  • Rowland, C. F. (2020). Introduction. In M. E. Poulsen (Ed.), The Jerome Bruner Library: From New York to Nijmegen. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Rubio-Fernández, P., & Jara-Ettinger, J. (2020). Incrementality and efficiency shape pragmatics across languages. Proceedings of the National Academy of Sciences, 117, 13399-13404. doi:10.1073/pnas.1922067117.

    Abstract

    To correctly interpret a message, people must attend to the context in which it was produced. Here we investigate how this process, known as pragmatic reasoning, is guided by two universal forces in human communication: incrementality and efficiency, with speakers of all languages interpreting language incrementally and making the most efficient use of the incoming information. Crucially, however, the interplay between these two forces results in speakers of different languages having different pragmatic information available at each point in processing, including inferences about speaker intentions. In particular, the position of adjectives relative to nouns (e.g., “black lamp” vs. “lamp black”) makes visual context information available in reverse orders. In an eye-tracking study comparing four unrelated languages that have been understudied with regard to language processing (Catalan, Hindi, Hungarian, and Wolof), we show that speakers of languages with an adjective–noun order integrate context by first identifying properties (e.g., color, material, or size), whereas speakers of languages with a noun–adjective order integrate context by first identifying kinds (e.g., lamps or chairs). Most notably, this difference allows listeners of adjective–noun descriptions to infer the speaker’s intention when using an adjective (e.g., “the black…” as implying “not the blue one”) and anticipate the target referent, whereas listeners of noun–adjective descriptions are subject to temporary ambiguity when deriving the same interpretation. We conclude that incrementality and efficiency guide pragmatic reasoning across languages, with different word orders having different pragmatic affordances.
  • Scharenborg, O., Ondel, L., Palaskar, S., Arthur, P., Ciannella, F., Du, M., Larsen, E., Merkx, D., Riad, R., Wang, L., Dupoux, E., Besacier, L., Black, A., Hasegawa-Johnson, M., Metze, F., Neubig, G., Stüker, S., Godard, P., & Müller, M. (2020). Speech technology for unwritten languages. IEEE/ACM Transactions on Audio, Speech and Language Processing, 28, 964-975. doi:10.1109/TASLP.2020.2973896.

    Abstract

    Speech technology plays an important role in our everyday life. Among others, speech is used for human-computer interaction, for instance for information retrieval and on-line shopping. In the case of an unwritten language, however, speech technology is unfortunately difficult to create, because it cannot be created by the standard combination of pre-trained speech-to-text and text-to-speech subsystems. The research presented in this article takes the first steps towards speech technology for unwritten languages. Specifically, the aim of this work was 1) to learn speech-to-meaning representations without using text as an intermediate representation, and 2) to test the sufficiency of the learned representations to regenerate speech or translated text, or to retrieve images that depict the meaning of an utterance in an unwritten language. The results suggest that building systems that go directly from speech-to-meaning and from meaning-to-speech, bypassing the need for text, is possible.
  • Schijven, D., Stevelink, R., McCormack, M., van Rheenen, W., Luykx, J. J., Koeleman, B. P., Veldink, J. H., Project MinE ALS GWAS Consortium, & International League Against Epilepsy Consortium on Complex Epilepsies (2020). Analysis of shared common genetic risk between amyotrophic lateral sclerosis and epilepsy. Neurobiology of Aging, 92, 153.e1-153.e5. doi:10.1016/j.neurobiolaging.2020.04.011.

    Abstract

    Because hyper-excitability has been shown to be a shared pathophysiological mechanism, we used the latest and largest genome-wide studies in amyotrophic lateral sclerosis (n = 36,052) and epilepsy (n = 38,349) to determine genetic overlap between these conditions. First, we showed no significant genetic correlation, also when binned on minor allele frequency. Second, we confirmed the absence of polygenic overlap using genomic risk score analysis. Finally, we did not identify pleiotropic variants in meta-analyses of the 2 diseases. Our findings indicate that amyotrophic lateral sclerosis and epilepsy do not share common genetic risk, showing that hyper-excitability in both disorders has distinct origins.

    Additional information

    1-s2.0-S0197458020301305-mmc1.docx
  • Schijven, D., Veldink, J. H., & Luykx, J. J. (2020). Genetic cross-disorder analysis in psychiatry: from methodology to clinical utility. The British Journal of Psychiatry, 216(5), 246-249. doi:10.1192/bjp.2019.72.

    Abstract

    Genome-wide association studies have uncovered hundreds of loci associated with psychiatric disorders. Cross-disorder studies are among the prime ramifications of such research. Here, we discuss the methodology of the most widespread methods and their clinical utility with regard to diagnosis, prediction, disease aetiology and treatment in psychiatry.
  • Schijven, D., Zinkstok, J. R., & Luykx, J. J. (2020). Van genetische bevindingen naar de klinische praktijk van de psychiater: Hoe genetica precisiepsychiatrie mogelijk kan maken [From genetic findings to the psychiatrist's clinical practice: How genetics can enable precision psychiatry]. Tijdschrift voor Psychiatrie, 62(9), 776-783.
  • Schoenmakers, G.-J. (2020). Freedom in the Dutch middle-field: Deriving discourse structure at the syntax-pragmatics interface. Glossa: a journal of general linguistics, 5(1): 114. doi:10.5334/gjgl.1307.

    Abstract

    This paper experimentally explores the optionality of Dutch scrambling structures with a definite object and an adverb. Most researchers argue that such structures are not freely interchangeable, but are subject to a strict discourse template. Existing analyses are based primarily on intuitions of the researchers, while experimental support is scarce. This paper reports on two experiments to gauge the existence of a strict discourse template. The discourse status of definite objects in scrambling clauses is first probed in a fill-in-the-blanks experiment and subsequently manipulated in a speeded judgment experiment. The results of these experiments indicate that scrambling is not as restricted as is commonly claimed. Although mismatches between surface order and pragmatic interpretation lead to a penalty in judgment rates and a rise in reaction times, they nonetheless occur in production and yield fully acceptable structures. Crucially, the penalties and delays emerge only in scrambling clauses with an adverb that is sensitive to focus placement. This paper argues that scrambling does not map onto discourse structure in the strict way proposed in most literature. Instead, a more complex syntax of deriving discourse relations is proposed which submits that the Dutch scrambling pattern results from two familiar processes which apply at the syntax-pragmatics interface: reconstruction and covert raising.
  • Seijdel, N., Tsakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2020). Depth in convolutional neural networks solves scene segmentation. PLOS Computational Biology, 16: e1008022. doi:10.1371/journal.pcbi.1008022.

    Abstract

    Feed-forward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds they appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object- and background information. For more shallow networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.
  • Seijdel, N., Jahfari, S., Groen, I. I. A., & Scholte, H. S. (2020). Low-level image statistics in natural scenes influence perceptual decision-making. Scientific Reports, 10: 10573. doi:10.1038/s41598-020-67661-8.

    Abstract

    A fundamental component of interacting with our environment is gathering and interpretation of sensory information. When investigating how perceptual information influences decision-making, most researchers have relied on manipulated or unnatural information as perceptual input, resulting in findings that may not generalize to real-world scenes. Unlike simplified, artificial stimuli, real-world scenes contain low-level regularities that are informative about the structural complexity, which the brain could exploit. In this study, participants performed an animal detection task on low, medium or high complexity scenes as determined by two biologically plausible natural scene statistics, contrast energy (CE) or spatial coherence (SC). In experiment 1, stimuli were sampled such that CE and SC both influenced scene complexity. Diffusion modelling showed that the speed of information processing was affected by low-level scene complexity. Experiment 2a/b refined these observations by showing how isolated manipulation of SC resulted in weaker but comparable effects, with an additional change in response boundary, whereas manipulation of only CE had no effect. Overall, performance was best for scenes with intermediate complexity. Our systematic definition quantifies how natural scene complexity interacts with decision-making. We speculate that CE and SC serve as an indication to adjust perceptual decision-making based on the complexity of the input.

    Additional information

    supplementary materials data code and data
  • Sekine, K., Schoechl, C., Mulder, K., Holler, J., Kelly, S., Furman, R., & Ozyurek, A. (2020). Evidence for children's online integration of simultaneous information from speech and iconic gestures: An ERP study. Language, Cognition and Neuroscience, 35(10), 1283-1294. doi:10.1080/23273798.2020.1737719.

    Abstract

    Children perceive iconic gestures, along with speech they hear. Previous studies have shown that children integrate information from both modalities. Yet it is not known whether children can integrate both types of information simultaneously as soon as they are available, as adults do, or process them separately initially and integrate them later. Using electrophysiological measures, we examined the online neurocognitive processing of gesture-speech integration in 6- to 7-year-old children. We focused on the N400 event-related potentials component, which is modulated by semantic integration load. Children watched video clips of matching or mismatching gesture-speech combinations, which varied the semantic integration load. The ERPs showed that the amplitude of the N400 was larger in the mismatching condition than in the matching condition. This finding provides the first neural evidence that by the ages of 6 or 7, children integrate multimodal semantic information in an online fashion comparable to that of adults.
  • Senft, G. (2020). “.. to grasp the native's point of view..” — A plea for a holistic documentation of the Trobriand Islanders' language, culture and cognition. Russian Journal of Linguistics, 24(1), 7-30. doi:10.22363/2687-0088-2020-24-1-7-30.

    Abstract

    In his famous introduction to his monograph “Argonauts of the Western Pacific”, Bronislaw Malinowski (1922: 24f.) points out that a “collection of ethnographic statements, characteristic narratives, typical utterances, items of folk-lore and magical formulae has to be given as a corpus inscriptionum, as documents of native mentality”. This is one of the prerequisites to “grasp the native's point of view, his relation to life, to realize his vision of his world”. Malinowski managed to document a “Corpus Inscriptionum Agriculturae Quriviniensis” in his second volume of “Coral Gardens and their Magic” (1935 Vol II: 79-342). But he himself did not manage to come up with a holistic corpus inscriptionum for the Trobriand Islanders. One of the main aims I have been pursuing in my research on the Trobriand Islanders' language, culture, and cognition has been to fill this ethnolinguistic niche. In this essay, I report what I had to do to carry out this complex and ambitious project, what forms and kinds of linguistic and cultural competence I had to acquire, and how I planned my data collection during 16 long- and short-term field trips to the Trobriand Islands between 1982 and 2012. The paper ends with a critical assessment of my Trobriand endeavor.
  • Senft, G. (2020). Kampfschild - vayola. In T. Brüderlin, S. Schien, & S. Stoll (Eds.), Ausgepackt! 125 Jahre Geschichte[n] im Museum Natur und Mensch (pp. 58-59). Freiburg: Michael Imhof Verlag.
  • Senft, G. (2020). 32 Kampfschild - dance or war shield - vayola. In T. Brüderlin, & S. Stoll (Eds.), Ausgepackt! 125 Jahre Geschichte[n] im Museum Natur und Mensch. Texte zur Ausstellung, Städtische Museen Freiburg, vom 20. Juni 2020 bis 10. Januar 2021 (pp. 76-77). Freiburg: Städtische Museen.
  • Shao, Z., & Rommers, J. (2020). How a question context aids word production: Evidence from the picture–word interference paradigm. Quarterly Journal of Experimental Psychology, 73(2), 165-173. doi:10.1177/1747021819882911.

    Abstract

    Difficulties in saying the right word at the right time arise at least in part because multiple response candidates are simultaneously activated in the speaker’s mind. The word selection process has been simulated using the picture–word interference task, in which participants name pictures while ignoring a superimposed written distractor word. However, words are usually produced in context, in the service of achieving a communicative goal. Two experiments addressed the questions whether context influences word production, and if so, how. We embedded the picture–word interference task in a dialogue-like setting, in which participants heard a question and named a picture as an answer to the question while ignoring a superimposed distractor word. The conversational context was either constraining or nonconstraining towards the answer. Manipulating the relationship between the picture name and the distractor, we focused on two core processes of word production: retrieval of semantic representations (Experiment 1) and phonological encoding (Experiment 2). The results of both experiments showed that naming reaction times (RTs) were shorter when preceded by constraining contexts as compared with nonconstraining contexts. Critically, constraining contexts decreased the effect of semantically related distractors but not the effect of phonologically related distractors. This suggests that conversational contexts can help speakers with aspects of the meaning of to-be-produced words, but phonological encoding processes still need to be performed as usual.
  • Sharpe, V., Weber, K., & Kuperberg, G. R. (2020). Impairments in probabilistic prediction and Bayesian learning can explain reduced neural semantic priming in schizophrenia. Schizophrenia Bulletin, 46(6), 1558-1566. doi:10.1093/schbul/sbaa069.

    Abstract

    It has been proposed that abnormalities in probabilistic prediction and dynamic belief updating explain the multiple features of schizophrenia. Here, we used electroencephalography (EEG) to ask whether these abnormalities can account for the well-established reduction in semantic priming observed in schizophrenia under nonautomatic conditions. We isolated predictive contributions to the neural semantic priming effect by manipulating the prime’s predictive validity and minimizing retroactive semantic matching mechanisms. We additionally examined the link between prediction and learning using a Bayesian model that probed dynamic belief updating as participants adapted to the increase in predictive validity. We found that patients were less likely than healthy controls to use the prime to predictively facilitate semantic processing on the target, resulting in a reduced N400 effect. Moreover, the trial-by-trial output of our Bayesian computational model explained between-group differences in trial-by-trial N400 amplitudes as participants transitioned from conditions of lower to higher predictive validity. These findings suggest that, compared with healthy controls, people with schizophrenia are less able to mobilize predictive mechanisms to facilitate processing at the earliest stages of accessing the meanings of incoming words. This deficit may be linked to a failure to adapt to changes in the broader environment. This reciprocal relationship between impairments in probabilistic prediction and Bayesian learning/adaptation may drive a vicious cycle that maintains cognitive disturbances in schizophrenia.

    Additional information

    supplementary material
  • Shen, C., & Janse, E. (2020). Maximum speech performance and executive control in young adult speakers. Journal of Speech, Language, and Hearing Research, 63, 3611-3627. doi:10.1044/2020_JSLHR-19-00257.

    Abstract

    Purpose

    This study investigated whether maximum speech performance, more specifically, the ability to rapidly alternate between similar syllables during speech production, is associated with executive control abilities in a nonclinical young adult population.
    Method

    Seventy-eight young adult participants completed two speech tasks, both operationalized as maximum performance tasks, to index their articulatory control: a diadochokinetic (DDK) task with nonword and real-word syllable sequences and a tongue-twister task. Additionally, participants completed three cognitive tasks, each covering one element of executive control (a Flanker interference task to index inhibitory control, a letter–number switching task to index cognitive switching, and an operation span task to index updating of working memory). Linear mixed-effects models were fitted to investigate how well maximum speech performance measures can be predicted by elements of executive control.
    Results

    Participants' cognitive switching ability was associated with their accuracy in both the DDK and tongue-twister speech tasks. Additionally, nonword DDK accuracy was more strongly associated with executive control than real-word DDK accuracy (which has to be interpreted with caution). None of the executive control abilities related to the maximum rates at which participants performed the two speech tasks.
    Conclusion

    These results underscore the association between maximum speech performance and executive control (cognitive switching in particular).
  • Shin, J., Ma, S., Hofer, E., Patel, Y., Vosberg, D. E., Tilley, S., Roshchupkin, G. V., Sousa, A. M. M., Jian, X., Gottesman, R., Mosley, T. H., Fornage, M., Saba, Y., Pirpamer, L., Schmidt, R., Schmidt, H., Carrion Castillo, A., Crivello, F., Mazoyer, B., Bis, J. C., Li, S., Yang, Q., Luciano, M., Karama, S., Lewis, L., Bastin, M. E., Harris, M. A., Wardlaw, J. M., Deary, I. E., Scholz, M., Loeffler, M., Witte, A. V., Beyer, F., Villringer, A., Armstrong, N. F., Mather, K. A., Ames, D., Jiang, J., Kwok, J. B., Schofield, P. R., Thalamuthu, A., Trollor, J. N., Wright, M. J., Brodaty, H., Wen, W., Sachdev, P. S., Terzikhan, N., Evans, T. E., Adams, H. H. H. H., Ikram, M. A., Frenzel, S., Van der Auwera-Palitschka, S., Wittfeld, K., Bülow, R., Grabe, H. J., Tzourio, C., Mishra, A., Maingault, S., Debette, S., Gillespie, N. A., Franz, C. E., Kremen, W. S., Ding, L., Jahanshad, N., the ENIGMA Consortium, Sestan, N., Pausova, Z., Seshadri, S., Paus, T., & the neuroCHARGE Working Group (2020). Global and regional development of the human cerebral cortex: Molecular architecture and occupational aptitudes. Cerebral Cortex, 30(7), 4121-4139. doi:10.1093/cercor/bhaa035.

    Abstract

    We have carried out meta-analyses of genome-wide association studies (GWAS) (n = 23 784) of the first two principal components (PCs) that group together cortical regions with shared variance in their surface area. PC1 (global) captured variations of most regions, whereas PC2 (visual) was specific to the primary and secondary visual cortices. We identified a total of 18 (PC1) and 17 (PC2) independent loci, which were replicated in another 25 746 individuals. The loci of the global PC1 included those associated previously with intracranial volume and/or general cognitive function, such as MAPT and IGF2BP1. The loci of the visual PC2 included DAAM1, a key player in the planar-cell-polarity pathway. We then tested associations with occupational aptitudes and, as predicted, found that the global PC1 was associated with General Learning Ability, and the visual PC2 was associated with the Form Perception aptitude. These results suggest that interindividual variations in global and regional development of the human cerebral cortex (and its molecular architecture) cascade—albeit in a very limited manner—to behaviors as complex as the choice of one’s occupation.
  • Sjerps, M. J., Decuyper, C., & Meyer, A. S. (2020). Initiation of utterance planning in response to pre-recorded and “live” utterances. Quarterly Journal of Experimental Psychology, 73(3), 357-374. doi:10.1177/1747021819881265.

    Abstract

    In everyday conversation, interlocutors often plan their utterances while listening to their conversational partners, thereby achieving short gaps between their turns. Important issues for current psycholinguistics are how interlocutors distribute their attention between listening and speech planning and how speech planning is timed relative to listening. Laboratory studies addressing these issues have used a variety of paradigms, some of which have involved using recorded speech to which participants responded, whereas others have involved interactions with confederates. This study investigated how this variation in the speech input affected the participants’ timing of speech planning. In Experiment 1, participants responded to utterances produced by a confederate, who sat next to them and looked at the same screen. In Experiment 2, they responded to recorded utterances of the same confederate. Analyses of the participants’ speech, their eye movements, and their performance in a concurrent tapping task showed that, compared with recorded speech, the presence of the confederate increased the processing load for the participants, but did not alter their global sentence planning strategy. These results have implications for the design of psycholinguistic experiments and theories of listening and speaking in dyadic settings.
  • Slonimska, A., Ozyurek, A., & Capirci, O. (2020). The role of iconicity and simultaneity for efficient communication: The case of Italian Sign Language (LIS). Cognition, 200: 104246. doi:10.1016/j.cognition.2020.104246.

    Abstract

    A fundamental assumption about language is that, regardless of language modality, it faces the linearization problem, i.e., an event that occurs simultaneously in the world has to be split in language to be organized on a temporal scale. However, the visual modality of signed languages allows its users not only to express meaning in a linear manner but also to use iconicity and multiple articulators together to encode information simultaneously. Accordingly, in cases when it is necessary to encode informatively rich events, signers can take advantage of simultaneous encoding in order to represent information about different referents and their actions simultaneously. This in turn would lead to more iconic and direct representation. Up to now, there has been no experimental study focusing on simultaneous encoding of information in signed languages and its possible advantage for efficient communication. In the present study, we assessed how many information units can be encoded simultaneously in Italian Sign Language (LIS) and whether the amount of simultaneously encoded information varies based on the amount of information that is required to be expressed. Twenty-three deaf adults participated in a director-matcher game in which they described 30 images of events that varied in amount of information they contained. Results revealed that as the information that had to be encoded increased, signers also increased use of multiple articulators to encode different information (i.e., kinematic simultaneity) and density of simultaneously encoded information in their production. Present findings show how the fundamental properties of signed languages, i.e., iconicity and simultaneity, are used for the purpose of efficient information encoding in Italian Sign Language (LIS).

    Additional information

    Supplementary data
  • Snijders, T. M., Benders, T., & Fikkert, P. (2020). Infants segment words from songs - an EEG study. Brain Sciences, 10(1): 39. doi:10.3390/brainsci10010039.

    Abstract

    Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
  • Sønderby, I. E., Gústafsson, Ó., Doan, N. T., Hibar, D. P., Martin-Brevet, S., Abdellaoui, A., Ames, D., Amunts, K., Andersson, M., Armstrong, N. J., Bernard, M., Blackburn, N., Blangero, J., Boomsma, D. I., Bralten, J., Brattbak, H.-R., Brodaty, H., Brouwer, R. M., Bülow, R., Calhoun, V., Caspers, S., Cavalleri, G., Chen, C.-H., Cichon, S., Ciufolini, S., Corvin, A., Crespo-Facorro, B., Curran, J. E., Dale, A. M., Dalvie, S., Dazzan, P., De Geus, E. J. C., De Zubicaray, G. I., De Zwarte, S. M. C., Delanty, N., Den Braber, A., Desrivières, S., Donohoe, G., Draganski, B., Ehrlich, S., Espeseth, T., Fisher, S. E., Franke, B., Frouin, V., Fukunaga, M., Gareau, T., Glahn, D. C., Grabe, H., Groenewold, N. A., Haavik, J., Håberg, A., Hashimoto, R., Hehir-Kwa, J. Y., Heinz, A., Hillegers, M. H. J., Hoffmann, P., Holleran, L., Hottenga, J.-J., Hulshoff, H. E., Ikeda, M., Jahanshad, N., Jernigan, T., Jockwitz, C., Johansson, S., Jonsdottir, G. A., Jönsson, E. G., Kahn, R., Kaufmann, T., Kelly, S., Kikuchi, M., Knowles, E. E. M., Kolskår, K. K., Kwok, J. B., Le Hellard, S., Leu, C., Liu, J., Lundervold, A. J., Lundervold, A., Martin, N. G., Mather, K., Mathias, S. R., McCormack, M., McMahon, K. L., McRae, A., Milaneschi, Y., Moreau, C., Morris, D., Mothersill, D., Mühleisen, T. W., Murray, R., Nordvik, J. E., Nyberg, L., Olde Loohuis, L. M., Ophoff, R., Paus, T., Pausova, Z., Penninx, B., Peralta, J. M., Pike, B., Prieto, C., Pudas, S., Quinlan, E., Quintana, D. S., Reinbold, C. S., Reis Marques, T., Reymond, A., Richard, G., Rodriguez-Herreros, B., Roiz-Santiañez, R., Rokicki, J., Rucker, J., Sachdev, P., Sanders, A.-M., Sando, S. B., Schmaal, L., Schofield, P. R., Schork, A. J., Schumann, G., Shin, J., Shumskaya, E., Sisodiya, S., Steen, V. M., Stein, D. J., Steinberg, S., Strike, L., Teumer, A., Thalamuthu, A., Tordesillas-Gutierrez, D., Turner, J., Ueland, T., Uhlmann, A., Ulfarsson, M. O., Van 't Ent, D., Van der Meer, D., Van Haren, N. E. M., Vaskinn, A., Vassos, E., Walters, G. B., Wang, Y., Wen, W., Whelan, C. D., Wittfeld, K., Wright, M., Yamamori, H., Zayats, T., Agartz, I., Westlye, L. T., Jacquemont, S., Djurovic, S., Stefansson, H., Stefansson, K., Thompson, P., & Andreassen, O. A. (2020). Dose response of the 16p11.2 distal copy number variant on intracranial volume and basal ganglia. Molecular Psychiatry, 25, 584-602. doi:10.1038/s41380-018-0118-1.

    Abstract

    Carriers of large recurrent copy number variants (CNVs) have a higher risk of developing neurodevelopmental disorders. The 16p11.2 distal CNV predisposes carriers to e.g., autism spectrum disorder and schizophrenia. We compared subcortical brain volumes of 12 16p11.2 distal deletion and 12 duplication carriers to 6882 non-carriers from the large-scale brain Magnetic Resonance Imaging collaboration, ENIGMA-CNV. After stringent CNV calling procedures, and standardized FreeSurfer image analysis, we found negative dose-response associations with copy number on intracranial volume and on regional caudate, pallidum and putamen volumes (β = −0.71 to −1.37; P < 0.0005). In an independent sample, consistent results were obtained, with significant effects in the pallidum (β = −0.95, P = 0.0042). The two data sets combined showed significant negative dose-response for the accumbens, caudate, pallidum, putamen and ICV (P = 0.0032, 8.9 × 10−6, 1.7 × 10−9, 3.5 × 10−12 and 1.0 × 10−4, respectively). Full scale IQ was lower in both deletion and duplication carriers compared to non-carriers. This is the first brain MRI study of the impact of the 16p11.2 distal CNV, and we demonstrate a specific effect on subcortical brain structures, suggesting a neuropathological pattern underlying the neurodevelopmental syndromes.
  • Speed, L. J., & Majid, A. (2020). Grounding language in the neglected senses of touch, taste, and smell. Cognitive Neuropsychology, 37(5-6), 363-392. doi:10.1080/02643294.2019.1623188.

    Abstract

    Grounded theories hold that sensorimotor activation is critical to language processing. Such theories have focused predominantly on the dominant senses of sight and hearing. Relatively fewer studies have assessed mental simulation within touch, taste, and smell, even though these senses are critically implicated in communication for important domains, such as health and wellbeing. We review work that sheds light on whether perceptual activation from these lesser studied modalities contributes to meaning in language. We critically evaluate data from behavioural, imaging, and cross-cultural studies. We conclude that evidence for sensorimotor simulation in touch, taste, and smell is weak. Comprehending language related to these senses may instead rely on simulation of emotion, as well as on crossmodal simulation of the “higher” senses of vision and audition. Overall, the data suggest the need for a refinement of embodiment theories, as not all sensory modalities provide equally strong evidence for mental simulation.
  • Sumer, B., & Ozyurek, A. (2020). No effects of modality in development of locative expressions of space in signing and speaking children. Journal of Child Language, 47(6), 1101-1131. doi:10.1017/S0305000919000928.

    Abstract

    Linguistic expressions of locative spatial relations in sign languages are mostly visually-motivated representations of space involving mapping of entities and spatial relations between them onto the hands and the signing space. These are also morphologically complex forms. It is debated whether modality-specific aspects of spatial expressions modulate spatial language development differently in signing compared to speaking children. In a picture description task, we compared the use of locative expressions for containment, support and occlusion relations by deaf children acquiring Turkish Sign Language and hearing children acquiring Turkish (3;5-9;11 years). Unlike previous reports suggesting a boosting effect of iconicity and/or a hindering effect of morphological complexity of the locative forms in sign languages, our results show similar developmental patterns for signing and speaking children's acquisition of these forms. Our results suggest the primacy of cognitive development in guiding the acquisition of locative expressions by speaking and signing children.
  • Takashima, A., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2020). Speaking in the brain: The interaction between words and syntax in sentence production. Journal of Cognitive Neuroscience, 32(8), 1466-1483. doi:10.1162/jocn_a_01563.

    Abstract

    This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences that differed in verb argument structures (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudoverbs). The experiment consisted of 30 mini-blocks of six sentences each. Each mini-block started with an example for the type of sentence to be produced in that block. On each trial in the mini-blocks, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to sentences with pseudoverbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus, analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left posterior middle temporal gyrus/angular gyrus area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs leads to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical–syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been mainly based on single-word production, comprehension, and language processing in aphasia.
  • Tan, Y., & Hagoort, P. (2020). Catecholaminergic modulation of semantic processing in sentence comprehension. Cerebral Cortex, 30(12), 6426-6443. doi:10.1093/cercor/bhaa204.

    Abstract

    Catecholamine (CA) function has been widely implicated in cognitive functions that are tied to the prefrontal cortex and striatal areas. The present study investigated the effects of methylphenidate, which is a CA agonist, on the electroencephalogram (EEG) response related to semantic processing using a double-blind, placebo-controlled, randomized, crossover, within-subject design. Forty-eight healthy participants read semantically congruent or incongruent sentences after receiving 20-mg methylphenidate or a placebo while their brain activity was monitored with EEG. To probe whether the catecholaminergic modulation is task-dependent, in one condition participants had to focus on comprehending the sentences, while in the other condition, they only had to attend to the font size of the sentence. The results demonstrate that methylphenidate has a task-dependent effect on semantic processing. Compared to placebo, when semantic processing was task-irrelevant, methylphenidate enhanced the detection of semantic incongruence as indexed by a larger N400 amplitude in the incongruent sentences; when semantic processing was task-relevant, methylphenidate induced a larger N400 amplitude in the semantically congruent condition, which was followed by a larger late positive complex effect. These results suggest that CA-related neurotransmitters influence language processing, possibly through the projections between the prefrontal cortex and the striatum, which contain many CA receptors.
  • Ten Oever, S., Meierdierks, T., Duecker, F., De Graaf, T., & Sack, A. (2020). Phase-coded oscillatory ordering promotes the separation of closely matched representations to optimize perceptual discrimination. iScience, 23(7): 101282. doi:10.1016/j.isci.2020.101282.

    Abstract

    Low-frequency oscillations are proposed to be involved in separating neuronal representations belonging to different items. Although item-specific neuronal activity was found to cluster on different oscillatory phases, the influence of this mechanism on perception is unknown. Here, we investigated the perceptual consequences of neuronal item separation through oscillatory clustering. In an electroencephalographic experiment, participants categorized sounds parametrically varying in pitch, relative to an arbitrary pitch boundary. Pre-stimulus theta and alpha phase biased near-boundary sound categorization to one category or the other. Phase also modulated whether evoked neuronal responses contributed more strongly to the fit of the sound envelope of one or another category. Intriguingly, participants with stronger oscillatory clustering (phase strongly biasing sound categorization) in the theta, but not alpha, range had steeper perceptual psychometric slopes (sharper sound category discrimination). These results indicate that neuronal sorting by phase directly influences subsequent perception and has a positive impact on discrimination performance.

    Additional information

    Supplemental Information
  • Ten Oever, S., De Weerd, P., & Sack, A. T. (2020). Phase-dependent amplification of working memory content and performance. Nature Communications, 11: 1832. doi:10.1038/s41467-020-15629-7.

    Abstract

    Successful working memory performance has been related to oscillatory mechanisms operating in low-frequency ranges. Yet, their mechanistic interaction with the distributed neural activity patterns representing the content of the memorized information remains unclear. Here, we record EEG during a working memory retention interval, while a task-irrelevant, high-intensity visual impulse stimulus is presented to boost the read-out of distributed neural activity related to the content held in working memory. Decoding of this activity with a linear classifier reveals significant modulations of classification accuracy by oscillatory phase in the theta/alpha ranges at the moment of impulse presentation. Additionally, behavioral accuracy is highest at the phases showing maximized decoding accuracy. At those phases, behavioral accuracy is higher in trials with the impulse compared to no-impulse trials. This constitutes the first evidence in humans that working memory information is maximized within limited phase ranges, and that phase-selective, sensory impulse stimulation can improve working memory.
  • Teng, X., Ma, M., Yang, J., Blohm, S., Cai, Q., & Tian, X. (2020). Constrained structure of ancient Chinese poetry facilitates speech content grouping. Current Biology, 30, 1299-1305. doi:10.1016/j.cub.2020.01.059.

    Abstract

    Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage [1, 2]; its poetic genres impose unique combinatory constraints on linguistic elements [3]. How does the constrained poetic structure facilitate speech segmentation when common linguistic [4, 5, 6, 7, 8] and statistical cues [5, 9] are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. Unprecedentedly, we found a phase precession phenomenon indicating predictive processes of speech segmentation—the neural phase advanced faster after listeners acquired knowledge of incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.

    Additional information

    Supplemental Information
  • Ter Hark, S. E., Jamain, S., Schijven, D., Lin, B. D., Bakker, M. K., Boland-Auge, A., Deleuze, J.-F., Troudet, R., Malhotra, A. K., Gülöksüz, S., Vinkers, C. H., Ebdrup, B. H., Kahn, R. S., Leboyer, M., & Luykx, J. J. (2020). A new genetic locus for antipsychotic-induced weight gain: A genome-wide study of first-episode psychosis patients using amisulpride (from the OPTiMiSE cohort). Journal of Psychopharmacology, 34(5), 524-531. doi:10.1177/0269881120907972.

    Abstract

    Background: Antipsychotic-induced weight gain is a common and debilitating side effect of antipsychotics. Although genome-wide association studies of antipsychotic-induced weight gain have been performed, few genome-wide loci have been discovered. Moreover, these genome-wide association studies have included a wide variety of antipsychotic compounds.

    Aims: We aim to gain more insight into the genomic loci affecting antipsychotic-induced weight gain. Given the variable pharmacological properties of antipsychotics, we hypothesized that targeting a single antipsychotic compound would provide new clues about genomic loci affecting antipsychotic-induced weight gain.

    Methods: All subjects included in this genome-wide association study (n=339) were first-episode schizophrenia spectrum disorder patients treated with amisulpride and were minimally medicated (defined as antipsychotic use <2 weeks in the previous year and/or <6 weeks lifetime). Weight gain was defined as the increase in body mass index from before until approximately 1 month after amisulpride treatment.

    Results: Our genome-wide association analyses for antipsychotic-induced weight gain yielded one genome-wide significant hit (rs78310016; β=1.05; p=3.66 × 10−8; n=206) in a locus not previously associated with antipsychotic-induced weight gain or body mass index. Minor allele carriers had an odds ratio of 3.98 (p=1.0 × 10−3) for clinically meaningful antipsychotic-induced weight gain (≥7% of baseline weight). In silico analysis elucidated a chromatin interaction with 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1. In an attempt to replicate single-nucleotide polymorphisms previously associated with antipsychotic-induced weight gain, we found none were associated with amisulpride-induced weight gain.

    Conclusion: Our findings suggest the involvement of rs78310016 and possibly 3-Hydroxy-3-Methylglutaryl-CoA Synthase 1 in antipsychotic-induced weight gain. In line with the unique binding profile of this atypical antipsychotic, our findings furthermore hint that the biological mechanisms underlying amisulpride-induced weight gain differ from those underlying weight gain induced by other atypical antipsychotics.
  • Terband, H., Rodd, J., & Maas, E. (2020). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. International Journal of Speech-Language Pathology, 22(4), 475-486. doi:10.1080/17549507.2019.1669711.

    Abstract

    Purpose: A recent behavioural experiment featuring a noise masking paradigm suggests that Apraxia of Speech (AOS) reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts. The present study set out to validate the interpretation of AOS as a possible feedforward impairment using computational neural modelling with the DIVA (Directions Into Velocities of Articulators) model.

    Method: In a series of computational simulations with the DIVA model featuring a noise-masking paradigm mimicking the behavioural experiment, we investigated the effect of a feedforward, feedback, feedforward + feedback, and an upper motor neuron dysarthria impairment on average vowel spacing and dispersion in the production of six /bVt/ speech targets.

    Result: The simulation results indicate that the output of the model with the simulated feedforward deficit best resembled the group findings for the human speakers with AOS.

    Conclusion: These results support the interpretation of the human observations, corroborating the notion that AOS can be conceptualised as a deficit in feedforward control.
  • Thompson, P. M., Jahanshad, N., Ching, C. R. K., Salminen, L. E., Thomopoulos, S. I., Bright, J., Baune, B. T., Bertolín, S., Bralten, J., Bruin, W. B., Bülow, R., Chen, J., Chye, Y., Dannlowski, U., De Kovel, C. G. F., Donohoe, G., Eyler, L. T., Faraone, S. V., Favre, P., Filippi, C. A., Frodl, T., Garijo, D., Gil, Y., Grabe, H. J., Grasby, K. L., Hajek, T., Han, L. K. M., Hatton, S. N., Hilbert, K., Ho, T. C., Holleran, L., Homuth, G., Hosten, N., Houenou, J., Ivanov, I., Jia, T., Kelly, S., Klein, M., Kwon, J. S., Laansma, M. A., Leerssen, J., Lueken, U., Nunes, A., O'Neill, J., Opel, N., Piras, F., Piras, F., Postema, M., Pozzi, E., Shatokhina, N., Soriano-Mas, C., Spalletta, G., Sun, D., Teumer, A., Tilot, A. K., Tozzi, L., Van der Merwe, C., Van Someren, E. J. W., Van Wingen, G. A., Völzke, H., Walton, E., Wang, L., Winkler, A. M., Wittfeld, K., Wright, M. J., Yun, J.-Y., Zhang, G., Zhang-James, Y., Adhikari, B. M., Agartz, I., Aghajani, M., Aleman, A., Althoff, R. R., Altmann, A., Andreassen, O. A., Baron, D. A., Bartnik-Olson, B. L., Bas-Hoogendam, J. M., Baskin-Sommers, A. R., Bearden, C. E., Berner, L. A., Boedhoe, P. S. W., Brouwer, R. M., Buitelaar, J. K., Caeyenberghs, K., Cecil, C. A. M., Cohen, R. A., Cole, J. H., Conrod, P. J., De Brito, S. A., De Zwarte, S. M. C., Dennis, E. L., Desrivieres, S., Dima, D., Ehrlich, S., Esopenko, C., Fairchild, G., Fisher, S. E., Fouche, J.-P., Francks, C., Frangou, S., Franke, B., Garavan, H. P., Glahn, D. C., Groenewold, N. A., Gurholt, T. P., Gutman, B. A., Hahn, T., Harding, I. H., Hernaus, D., Hibar, D. P., Hillary, F. G., Hoogman, M., Hulshoff Pol, H. E., Jalbrzikowski, M., Karkashadze, G. A., Klapwijk, E. 
T., Knickmeyer, R. C., Kochunov, P., Koerte, I. K., Kong, X., Liew, S.-L., Lin, A. P., Logue, M. W., Luders, E., Macciardi, F., Mackey, S., Mayer, A. R., McDonald, C. R., McMahon, A. B., Medland, S. E., Modinos, G., Morey, R. A., Mueller, S. C., Mukherjee, P., Namazova-Baranova, L., Nir, T. M., Olsen, A., Paschou, P., Pine, D. S., Pizzagalli, F., Rentería, M. E., Rohrer, J. D., Sämann, P. G., Schmaal, L., Schumann, G., Shiroishi, M. S., Sisodiya, S. M., Smit, D. J. A., Sønderby, I. E., Stein, D. J., Stein, J. L., Tahmasian, M., Tate, D. F., Turner, J. A., Van den Heuvel, O. A., Van der Wee, N. J. A., Van der Werf, Y. D., Van Erp, T. G. M., Van Haren, N. E. M., Van Rooij, D., Van Velzen, L. S., Veer, I. M., Veltman, D. J., Villalon-Reina, J. E., Walter, H., Whelan, C. D., Wilde, E. A., Zarei, M., Zelman, V., & Enigma Consortium (2020). ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries. Translational Psychiatry, 10(1): 100. doi:10.1038/s41398-020-0705-1.

    Abstract

    This review summarizes the last decade of work by the ENIGMA (Enhancing NeuroImaging Genetics through Meta Analysis) Consortium, a global alliance of over 1400 scientists across 43 countries, studying the human brain in health and disease. Building on large-scale genetic studies that discovered the first robustly replicated genetic loci associated with brain metrics, ENIGMA has diversified into over 50 working groups (WGs), pooling worldwide data and expertise to answer fundamental questions in neuroscience, psychiatry, neurology, and genetics. Most ENIGMA WGs focus on specific psychiatric and neurological conditions; other WGs study normal variation due to sex and gender differences, or development and aging; still other WGs develop methodological pipelines and tools to facilitate harmonized analyses of “big data” (i.e., genetic and epigenetic data, multimodal MRI, and electroencephalography data). These international efforts have yielded the largest neuroimaging studies to date in schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder, substance use disorders, obsessive-compulsive disorder, attention-deficit/hyperactivity disorder, autism spectrum disorders, epilepsy, and 22q11.2 deletion syndrome. More recent ENIGMA WGs have formed to study anxiety disorders, suicidal thoughts and behavior, sleep and insomnia, eating disorders, irritability, brain injury, antisocial personality and conduct disorder, and dissociative identity disorder. Here, we summarize the first decade of ENIGMA’s activities and ongoing projects, and describe the successes and challenges encountered along the way. We highlight the advantages of collaborative large-scale coordinated data analyses for testing reproducibility and robustness of findings, offering the opportunity to identify brain systems involved in clinical syndromes across diverse samples and associated genetic, environmental, demographic, cognitive, and psychosocial factors.

    Additional information

    41398_2020_705_MOESM1_ESM.pdf
  • Thompson, P. A., Bishop, D. V. M., Eising, E., Fisher, S. E., & Newbury, D. F. (2020). Generalized Structured Component Analysis in candidate gene association studies: Applications and limitations [version 2; peer review: 3 approved]. Wellcome Open Research, 4: 142. doi:10.12688/wellcomeopenres.15396.2.

    Abstract

    Background: Generalized Structured Component Analysis (GSCA) is a component-based alternative to traditional covariance-based structural equation modelling. This method has previously been applied to test for association between candidate genes and clinical phenotypes, contrasting with traditional genetic association analyses that adopt univariate testing of many individual single nucleotide polymorphisms (SNPs) with correction for multiple testing.
    Methods: We first evaluate the ability of the GSCA method to replicate two previous findings from a genetics association study of developmental language disorders. We then present the results of a simulation study to test the validity of the GSCA method under more restrictive data conditions, using smaller sample sizes and larger numbers of SNPs than have previously been investigated. Finally, we compare GSCA performance against univariate association analysis conducted using PLINK v1.9.
    Results: Results from simulations show that power to detect effects depends not just on sample size, but also on the ratio of SNPs with effect to number of SNPs tested within a gene. Inclusion of many SNPs in a model dilutes true effects.
    Conclusions: We propose that GSCA is a useful method for replication studies, when candidate SNPs have been identified, but should not be used for exploratory analysis.

    Additional information

    data via OSF
  • Todorova, L., & Neville, D. A. (2020). Associative and identity words promote the speed of visual categorization: A hierarchical drift diffusion account. Frontiers in Psychology, 11: 955. doi:10.3389/fpsyg.2020.00955.

    Abstract

    Words can either boost or hinder the processing of visual information, which can lead to facilitation or interference of the behavioral response. We investigated the stage (response execution or target processing) of verbal interference/facilitation in the response priming paradigm with a gender categorization task. Participants in our study were asked to judge whether the presented stimulus was a female or male face that was briefly preceded by a gender word either congruent (prime: “man,” target: “man”), incongruent (prime: “woman,” target: “man”) or neutral (prime: “day,” target: “man”) with respect to the face stimulus. We investigated whether related word-picture pairs resulted in faster reaction times in comparison to the neutral word-picture pairs (facilitation) and whether unrelated word-picture pairs resulted in slower reaction times in comparison to neutral word-picture pairs (interference). We further examined whether these effects (if any) map onto response conflict or aspects of target processing. In addition, identity (“man,” “woman”) and associative (“tie,” “dress”) primes were introduced to investigate the cognitive mechanisms of semantic and Stroop-like effects in response priming (introduced respectively by associations and identity words). We analyzed responses and reaction times using the drift diffusion model to examine the effect of facilitation and/or interference as a function of the prime type. We found that, regardless of prime type, words introduce a facilitatory effect, which maps onto the processes of visual attention and response execution.
  • Todorova, L., Neville, D. A., & Piai, V. (2020). Lexical-semantic and executive deficits revealed by computational modelling: A drift diffusion model perspective. Neuropsychologia, 146: 107560. doi:10.1016/j.neuropsychologia.2020.107560.

    Abstract

    Flexible language use requires coordinated functioning of two systems: conceptual representations and control. The interaction between the two systems can be observed when people are asked to match a word to a picture. Participants are slower and less accurate for related word-picture pairs (word: banana, picture: apple) relative to unrelated pairs (word: banjo, picture: apple). The mechanism underlying interference, however, is still unclear. We analyzed word-picture matching (WPM) performance of patients with stroke-induced lesions to the left-temporal (N = 5) or left-frontal cortex (N = 5) and matched controls (N = 12) using the drift diffusion model (DDM). In DDM, the process of making a decision is described as the stochastic accumulation of evidence towards a response. The parameters of the DDM that characterize this process are decision threshold, drift rate, starting point, and non-decision time, each of which bears cognitive interpretability. We compared the estimated model parameters from controls and patients to investigate the mechanisms of WPM interference. WPM performance in controls was explained by the amount of information needed to make a decision (decision threshold): a higher threshold was associated with related word-picture pairs relative to unrelated ones. No difference was found in the quality of the evidence (drift rate). This suggests an executive rather than semantic mechanism underlying WPM interference. Patients with temporal and frontal lesions both exhibited an increased drift rate and decision threshold for unrelated pairs relative to related ones. Left-frontal and temporal damage affected the computations required by WPM similarly, resulting in systematic deficits across lexical-semantic memory and executive functions. These results support a diverse but interactive role of lexical-semantic memory and semantic control mechanisms.

    Additional information

    supplementary material
  • Trujillo, J. P., Simanova, I., Bekkering, H., & Ozyurek, A. (2020). The communicative advantage: How kinematic signaling supports semantic comprehension. Psychological Research, 84, 1897-1911. doi:10.1007/s00426-019-01198-y.

    Abstract

    Humans are unique in their ability to communicate information through representational gestures which visually simulate an action (e.g., moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. If and how this modulation influences addressees’ comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more- (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actors’ faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance in more- compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. Results provide insights into mutual understanding processes as well as into the creation of artificial communicative agents.

    Additional information

    Supplementary material
  • Trujillo, J. P., Simanova, I., Ozyurek, A., & Bekkering, H. (2020). Seeing the unexpected: How brains read communicative intent through kinematics. Cerebral Cortex, 30(3), 1056-1067. doi:10.1093/cercor/bhz148.

    Abstract

    Social interaction requires us to recognize subtle cues in behavior, such as kinematic differences in actions and gestures produced with different social intentions. Neuroscientific studies indicate that the putative mirror neuron system (pMNS) in the premotor cortex and mentalizing system (MS) in the medial prefrontal cortex support inferences about contextually unusual actions. However, little is known regarding the brain dynamics of these systems when viewing communicatively exaggerated kinematics. In an event-related functional magnetic resonance imaging experiment, 28 participants viewed stick-light videos of pantomime gestures, recorded in a previous study, which contained varying degrees of communicative exaggeration. Participants made either social or nonsocial classifications of the videos. Using participant responses and pantomime kinematics, we modeled the probability of each video being classified as communicative. Interregion connectivity and activity were modulated by kinematic exaggeration, depending on the task. In the Social Task, communicativeness of the gesture increased activation of several pMNS and MS regions and modulated top-down coupling from the MS to the pMNS, but engagement of the pMNS and MS was not found in the nonsocial task. Our results suggest that expectation violations can be a key cue for inferring communicative intention, extending previous findings from wholly unexpected actions to more subtle social signaling.
  • Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2020). Addressing publication bias in Meta-Analysis: Empirical findings from community-augmented meta-analyses of infant language development. Zeitschrift für Psychologie, 228(1), 50-61. doi:10.1027/2151-2604/a000393.

    Abstract

    Meta-analyses are an indispensable research synthesis tool for characterizing bodies of literature and advancing theories. One important open question concerns the inclusion of unpublished data into meta-analyses. Finding such studies can be effortful, but their exclusion potentially leads to consequential biases like overestimation of a literature’s mean effect. We address two questions about unpublished data using MetaLab, a collection of community-augmented meta-analyses focused on developmental psychology. First, we assess to what extent MetaLab datasets include gray literature, and by what search strategies they are unearthed. We find that an average of 11% of datapoints are from unpublished literature; standard search strategies like database searches, complemented with individualized approaches like including authors’ own data, contribute the majority of this literature. Second, we analyze the effect of including versus excluding unpublished literature on estimates of effect size and publication bias, and find this decision does not affect outcomes. We discuss lessons learned and implications.

    Additional information

    Link to Dataset on PsychArchives
  • Tulling, M., Law, R., Cournane, A., & Pylkkänen, L. (2020). Neural correlates of modal displacement and discourse-updating under (un)certainty. eNeuro, 8(1): 0290-20.2020. doi:10.1523/ENEURO.0290-20.2020.

    Abstract

    A hallmark of human thought is the ability to think about not just the actual world, but also about alternative ways the world could be. One way to study this contrast is through language. Language has grammatical devices for expressing possibilities and necessities, such as the words might or must. With these devices, called “modal expressions,” we can study the actual vs. possible contrast in a highly controlled way. While factual utterances such as “There is a monster under my bed” update the here-and-now of a discourse model, a modal version of this sentence, “There might be a monster under my bed,” displaces from the here-and-now and merely postulates a possibility. We used magnetoencephalography (MEG) to test whether the processes of discourse updating and modal displacement dissociate in the brain. Factual and modal utterances were embedded in short narratives, and across two experiments, factual expressions increased the measured activity over modal expressions. However, the localization of the increase appeared to depend on perspective: signal localizing in right temporo-parietal areas increased when updating others’ beliefs, while frontal medial areas seem sensitive to updating one’s own beliefs. The presence of modal displacement did not elevate MEG signal strength in any of our analyses. In sum, this study identifies potential neural signatures of the process by which facts get added to our mental representation of the world.

    Competing Interest Statement: The authors have declared no competing interest.

    Additional information

    Link to Preprint on BioRxiv
  • Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Interleaved lexical and audiovisual information can retune phoneme boundaries. Attention, Perception & Psychophysics, 82, 2018-2026. doi:10.3758/s13414-019-01961-8.

    Abstract

    To adapt to situations in which speech perception is difficult, listeners can adjust boundaries between phoneme categories using perceptual learning. Such adjustments can draw on lexical information in surrounding speech, or on visual cues via speech-reading. In the present study, listeners proved they were able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information and given the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type. Listeners were able to switch between cues to adjust phoneme boundaries, and resulting effects were comparable to results from listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger effects, commensurate with their applicability for adapting to noisy environments. Lexical cues were able to induce effects with fewer exposure stimuli and a changing phoneme bias, in a design unlike most prior studies of lexical retuning. While lexical retuning effects were relatively weaker compared to audiovisual recalibration, this discrepancy could reflect how lexical retuning may be more suitable for adapting to speakers than to environments. Nonetheless, the presence of the lexical retuning effects suggests that it may be invoked at a faster rate than previously seen. In general, this technique has further illuminated the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning.
  • Ullas, S., Formisano, E., Eisner, F., & Cutler, A. (2020). Audiovisual and lexical cues do not additively enhance perceptual adaptation. Psychonomic Bulletin & Review, 27, 707-715. doi:10.3758/s13423-020-01728-5.

    Abstract

    When listeners experience difficulty in understanding a speaker, lexical and audiovisual (or lipreading) information can be a helpful source of guidance. These two types of information embedded in speech can also guide perceptual adjustment, also known as recalibration or perceptual retuning. With retuning or recalibration, listeners can use these contextual cues to temporarily or permanently reconfigure internal representations of phoneme categories to adjust to and understand novel interlocutors more easily. These two types of perceptual learning, previously investigated in large part separately, are highly similar in allowing listeners to use speech-external information to make phoneme boundary adjustments. This study explored whether the two sources may work in conjunction to induce adaptation, thus emulating real life, in which listeners are indeed likely to encounter both types of cue together. Listeners who received combined audiovisual and lexical cues showed perceptual learning effects similar to listeners who only received audiovisual cues, while listeners who received only lexical cues showed weaker effects compared with the two other groups. The combination of cues did not lead to additive retuning or recalibration effects, suggesting that lexical and audiovisual cues operate differently with regard to how listeners use them for reshaping perceptual categories. Reaction times did not significantly differ across the three conditions, so none of the forms of adjustment were either aided or hindered by processing time differences. Mechanisms underlying these forms of perceptual learning may diverge in numerous ways despite similarities in experimental applications.

    Additional information

    Data and materials
  • Ullas, S., Hausfeld, L., Cutler, A., Eisner, F., & Formisano, E. (2020). Neural correlates of phonetic adaptation as induced by lexical and audiovisual context. Journal of Cognitive Neuroscience, 32(11), 2145-2158. doi:10.1162/jocn_a_01608.

    Abstract

    When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
  • Ünal, E., & Papafragou, A. (2020). Relations between language and cognition: Evidentiality and sources of knowledge. Topics in Cognitive Science, 12(1), 115-135. doi:10.1111/tops.12355.

    Abstract

    Understanding and acquiring language involve mapping language onto conceptual representations. Nevertheless, several issues remain unresolved with respect to (a) how such mappings are performed, and (b) whether conceptual representations are susceptible to cross‐linguistic influences. In this article, we discuss these issues focusing on the domain of evidentiality and sources of knowledge. Empirical evidence in this domain yields growing support for the proposal that linguistic categories of evidentiality are tightly linked to, build on, and reflect conceptual representations of sources of knowledge that are shared across speakers of different languages.
  • Urbanus, B. H. A., Peter, S., Fisher, S. E., & De Zeeuw, C. I. (2020). Region-specific Foxp2 deletions in cortex, striatum or cerebellum cannot explain vocalization deficits observed in spontaneous global knockouts. Scientific Reports, 10: 21631. doi:10.1038/s41598-020-78531-8.

    Abstract

    FOXP2 has been identified as a gene related to speech in humans, based on rare mutations that yield significant impairments in speech at the level of both motor performance and language comprehension. Disruptions of the murine orthologue Foxp2 in mouse pups have been shown to interfere with production of ultrasonic vocalizations (USVs). However, it remains unclear which structures are responsible for these deficits. Here, we show that conditional knockout mice with selective Foxp2 deletions targeting the cerebral cortex, striatum or cerebellum, three key sites of motor control with robust neural gene expression, do not recapture the profile of pup USV deficits observed in mice with global disruptions of this gene. Moreover, we observed that global Foxp2 knockout pups show substantive reductions in USV production as well as an overproduction of short broadband noise “clicks”, which was not present in the brain region-specific knockouts. These data indicate that deficits of Foxp2 expression in the cortex, striatum or cerebellum cannot solely explain the disrupted vocalization behaviours in global Foxp2 knockouts. Our findings raise the possibility that the impact of Foxp2 disruption on USV is mediated at least in part by effects of this gene on the anatomical prerequisites for vocalizing.
  • Van der Meer, D., Sønderby, I. E., Kaufmann, T., Walters, G. B., Abdellaoui, A., Ames, D., Amunts, K., Andersson, M., Armstrong, N. J., Bernard, M., Blackburn, N. B., Blangero, J., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Bülow, R., Cahn, W., Calhoun, V. D., Caspers, S., Cavalleri, G. L., Ching, C. R. K., Cichon, S., Ciufolini, S., Corvin, A., Crespo-Facorro, B., Curran, J. E., Dalvie, S., Dazzan, P., De Geus, E. J. C., De Zubicaray, G. I., De Zwarte, S. M. C., Delanty, N., Den Braber, A., Desrivieres, S., Di Forti, M., Doherty, J. L., Donohoe, G., Ehrlich, S., Eising, E., Espeseth, T., Fisher, S. E., Fladby, T., Frei, O., Frouin, V., Fukunaga, M., Gareau, T., Glahn, D. C., Grabe, H. J., Groenewold, N. A., Gústafsson, Ó., Haavik, J., Haberg, A. K., Hashimoto, R., Hehir-Kwa, J. Y., Hibar, D. P., Hillegers, M. H. J., Hoffmann, P., Holleran, L., Hottenga, J.-J., Hulshoff Pol, H. E., Ikeda, M., Jacquemont, S., Jahanshad, N., Jockwitz, C., Johansson, S., Jönsson, E. G., Kikuchi, M., Knowles, E. E. M., Kwok, J. B., Le Hellard, S., Linden, D. E. J., Liu, J., Lundervold, A., Lundervold, A. J., Martin, N. G., Mather, K. A., Mathias, S. R., McMahon, K. L., McRae, A. F., Medland, S. E., Moberget, T., Moreau, C., Morris, D. W., Mühleisen, T. W., Murray, R. M., Nordvik, J. E., Nyberg, L., Olde Loohuis, L. M., Ophoff, R. A., Owen, M. J., Paus, T., Pausova, Z., Peralta, J. M., Pike, B., Prieto, C., Quinlan, E. B., Reinbold, C. S., Reis Marques, T., Rucker, J. J. H., Sachdev, P. S., Sando, S. B., Schofield, P. R., Schork, A. J., Schumann, G., Shin, J., Shumskaya, E., Silva, A. I., Sisodiya, S. M., Steen, V. M., Stein, D. J., Strike, L. T., Tamnes, C. 
K., Teumer, A., Thalamuthu, A., Tordesillas-Gutiérrez, D., Uhlmann, A., Úlfarsson, M. Ö., Van 't Ent, D., Van den Bree, M. B. M., Vassos, E., Wen, W., Wittfeld, K., Wright, M. J., Zayats, T., Dale, A. M., Djurovic, S., Agartz, I., Westlye, L. T., Stefánsson, H., Stefánsson, K., Thompson, P. M., & Andreassen, O. A. (2020). Association of copy number variation of the 15q11.2 BP1-BP2 region with cortical and subcortical morphology and cognition. JAMA Psychiatry, 77(4), 420-430. doi:10.1001/jamapsychiatry.2019.3779.

    Abstract

    Importance Recurrent microdeletions and duplications in the genomic region 15q11.2 between breakpoints 1 (BP1) and 2 (BP2) are associated with neurodevelopmental disorders. These structural variants are present in 0.5% to 1.0% of the population, making 15q11.2 BP1-BP2 the site of the most prevalent known pathogenic copy number variation (CNV). It is unknown to what extent this CNV influences brain structure and affects cognitive abilities.

    Objective To determine the association of the 15q11.2 BP1-BP2 deletion and duplication CNVs with cortical and subcortical brain morphology and cognitive task performance.

    Design, Setting, and Participants In this genetic association study, T1-weighted brain magnetic resonance images were combined with genetic data from the ENIGMA-CNV consortium and the UK Biobank, with a replication cohort from Iceland. In total, 203 deletion carriers, 45 247 noncarriers, and 306 duplication carriers were included. Data were collected from August 2015 to April 2019, and data were analyzed from September 2018 to September 2019.

    Main Outcomes and Measures The associations of the CNV with global and regional measures of surface area and cortical thickness as well as subcortical volumes were investigated, correcting for age, age2, sex, scanner, and intracranial volume. Additionally, measures of cognitive ability were analyzed in the full UK Biobank cohort.

    Results Of 45 756 included individuals, the mean (SD) age was 55.8 (18.3) years, and 23 754 (51.9%) were female. Compared with noncarriers, deletion carriers had a lower surface area (Cohen d = −0.41; SE, 0.08; P = 4.9 × 10−8), thicker cortex (Cohen d = 0.36; SE, 0.07; P = 1.3 × 10−7), and a smaller nucleus accumbens (Cohen d = −0.27; SE, 0.07; P = 7.3 × 10−5). There was also a significant negative dose response on cortical thickness (β = −0.24; SE, 0.05; P = 6.8 × 10−7). Regional cortical analyses showed a localization of the effects to the frontal, cingulate, and parietal lobes. Further, cognitive ability was lower for deletion carriers compared with noncarriers on 5 of 7 tasks.

    Conclusions and Relevance These findings, from the largest CNV neuroimaging study to date, provide evidence that 15q11.2 BP1-BP2 structural variation is associated with brain morphology and cognition, with deletion carriers being particularly affected. The pattern of results fits with known molecular functions of genes in the 15q11.2 BP1-BP2 region and suggests involvement of these genes in neuronal plasticity. These neurobiological effects likely contribute to the association of this CNV with neurodevelopmental disorders.
  • Van der Meer, D., Rokicki, J., Kaufmann, T., Córdova-Palomera, A., Moberget, T., Alnæs, D., Bettella, F., Frei, O., Trung Doan, N., Sønderby, I. E., Smeland, O. B., Agartz, I., Bertolino, A., Bralten, J., Brandt, C. L., Buitelaar, J. K., Djurovic, S., Van Donkelaar, M. M. J., Dørum, E. S., Espeseth, T., Faraone, S. V., Fernandez, G., Fisher, S. E., Franke, B., Haatveit, B., Hartman, C., Hoekstra, P. J., Haberg, A. K., Jönsson, E. G., Kolskår, K. K., Le Hellard, S., Lund, M. J., Lundervold, A. J., Lundervold, A., Melle, I., Monereo Sánchez, J., Norbom, L. C., Nordvik, J. E., Nyberg, L., Oosterlaan, J., Papalino, M., Papassotiropoulos, A., Pergola, G., De Quervain, D. J. F., Richard, G., Sanders, A.-M., Selvaggi, P., Shumskaya, E., Steen, V. M., Tønnesen, S., Ulrichsen, K. M., Zwiers, M., Andreassen, O. A., & Westlye, L. T. (2020). Brain scans from 21,297 individuals reveal the genetic architecture of hippocampal subfield volumes. Molecular Psychiatry, 25, 3053-3065. doi:10.1038/s41380-018-0262-7.

    Abstract

    The hippocampus is a heterogeneous structure, comprising histologically distinguishable subfields. These subfields are differentially involved in memory consolidation, spatial navigation and pattern separation, complex functions often impaired in individuals with brain disorders characterized by reduced hippocampal volume, including Alzheimer’s disease (AD) and schizophrenia. Given the structural and functional heterogeneity of the hippocampal formation, we sought to characterize the subfields’ genetic architecture. T1-weighted brain scans (n = 21,297, 16 cohorts) were processed with the hippocampal subfields algorithm in FreeSurfer v6.0. We ran a genome-wide association analysis on each subfield, co-varying for whole hippocampal volume. We further calculated the single-nucleotide polymorphism (SNP)-based heritability of 12 subfields, as well as their genetic correlation with each other, with other structural brain features and with AD and schizophrenia. All outcome measures were corrected for age, sex and intracranial volume. We found 15 unique genome-wide significant loci across six subfields, of which eight had not been previously linked to the hippocampus. Top SNPs were mapped to genes associated with neuronal differentiation, locomotor behaviour, schizophrenia and AD. The volumes of all the subfields were estimated to be heritable (h² from 0.14 to 0.27, all p < 1 × 10⁻¹⁶) and clustered together based on their genetic correlations compared with other structural brain features. There was also evidence of genetic overlap of subicular subfield volumes with schizophrenia. We conclude that hippocampal subfields have partly distinct genetic determinants associated with specific biological processes and traits. Taking into account this specificity may increase our understanding of hippocampal neurobiology and associated pathologies.

    Additional information

    41380_2018_262_MOESM1_ESM.docx
  • Van Os, M., De Jong, N. H., & Bosker, H. R. (2020). Fluency in dialogue: Turn‐taking behavior shapes perceived fluency in native and nonnative speech. Language Learning, 70(4), 1183-1217. doi:10.1111/lang.12416.

    Abstract

    Fluency is an important part of research on second language learning, but most research on language proficiency typically has not included oral fluency as part of interaction, even though natural communication usually occurs in conversations. The present study considered aspects of turn-taking behavior as part of the construct of fluency and investigated whether these aspects differentially influence perceived fluency ratings of native and non-native speech. Results from two experiments using acoustically manipulated speech showed that, in native speech, both too ‘eager’ answers (interrupting a question with a fast answer) and too ‘reluctant’ answers (answering slowly after a long turn gap) negatively affected fluency ratings. However, in non-native speech, only too ‘reluctant’ answers led to lower fluency ratings. Thus, we demonstrate that acoustic properties of dialogue are perceived as part of fluency. By adding to our current understanding of dialogue fluency, these lab-based findings carry implications for language teaching and assessment.

    Additional information

    data + R analysis script via osf
  • Van Donkelaar, M. M. J., Hoogman, M., Shumskaya, E., Buitelaar, J. K., Bralten, J., & Franke, B. (2020). Monoamine and neuroendocrine gene-sets associate with frustration-based aggression in a gender-specific manner. European Neuropsychopharmacology, 30, 75-86. doi:10.1016/j.euroneuro.2017.11.016.

    Abstract

    Investigating phenotypic heterogeneity in aggression and understanding the molecular biological basis of aggression subtypes may lead to new prevention and treatment options. In the current study, we evaluated the taxonomy of aggression and examined specific genetic mechanisms underlying aggression subtypes in healthy males and females. Confirmatory Factor Analysis (CFA) was used to replicate a recently reported three-factor model of the Reactive Proactive Questionnaire (RPQ) in healthy adults (n=661; median age 24.0 years; 41% male). Gene-set association analysis, aggregating common genetic variants within (a combination of) three molecular pathways previously implicated in aggression, i.e. serotonergic, dopaminergic, and neuroendocrine signaling, was conducted with MAGMA software in males and females separately (total n=395) for aggression subtypes. We replicate the three-factor CFA model of the RPQ, and found males to score significantly higher than females on one of these factors: proactive aggression. The genetic association analysis showed a female-specific association of genetic variation in the combined gene-set with a different factor of the RPQ: reactive aggression due to internal frustration. Both the neuroendocrine and serotonergic gene-sets contributed significantly to this association. Our genetic findings are subtype- and sex-specific, stressing the value of efforts to reduce heterogeneity in research on aggression etiology. Importantly, subtype- and sex-differences in the underlying pathophysiology of aggression suggest that optimal treatment options will have to be tailored to the individual patient. Male and female intervention needs might differ, stressing the need for further sex-specific research on aggression. Our work highlights opportunities for sample size maximization offered by population-based studies of aggression.
  • Vanlangendonck, F., Peeters, D., Rüschemeyer, S.-A., & Dijkstra, T. (2020). Mixing the stimulus list in bilingual lexical decision turns cognate facilitation effects into mirrored inhibition effects. Bilingualism: Language and Cognition, 23(4), 836-844. doi:10.1017/S1366728919000531.

    Abstract

    To test the BIA+ and Multilink models’ accounts of how bilinguals process words with different degrees of cross-linguistic orthographic and semantic overlap, we conducted two experiments manipulating stimulus list composition. Dutch-English late bilinguals performed two English lexical decision tasks including the same set of cognates, interlingual homographs, English control words, and pseudowords. In one task, half of the pseudowords were replaced with Dutch words, requiring a ‘no’ response. This change from pure to mixed language list context was found to turn cognate facilitation effects into inhibition. Relative to control words, larger effects were found for cognate pairs with an increasing cross-linguistic form overlap. Identical cognates produced considerably larger effects than non-identical cognates, supporting their special status in the bilingual lexicon. Response patterns for different item types are accounted for in terms of the items’ lexical representation and their binding to ‘yes’ and ‘no’ responses in pure vs mixed lexical decision.

    Additional information

    S1366728919000531sup001.pdf
  • Verdonschot, R. G., & Masuda, H. (2020). Sumacku or Smack? The value of analyzing acoustic signals when investigating the fundamental phonological unit of language production. Psychological Research, 84(3), 547-557. doi:10.1007/s00426-018-1073-9.

    Abstract

    An ongoing debate in the speech production literature suggests that the initial building block used to build up speech sounds differs between languages. That is, Germanic languages are suggested to use the phoneme, but Japanese and Chinese are proposed to use the mora or syllable, respectively. Several studies investigated this matter from a chronometric perspective (i.e., RTs and accuracy). However, less attention has been paid to the actual acoustic utterances. The current study investigated the verbal responses of two Japanese-English bilingual groups of different proficiency levels (i.e., high and low) when naming English words and found that the presence or absence of vowel epenthesis depended on proficiency. The results indicate that: (1) English word pronunciation by low-proficient Japanese-English bilinguals is likely based on their L1 (Japanese) building block and (2) that future studies would benefit from analyzing acoustic data as well when making inferences from chronometric data.
  • Verheijen, J., Wong, S. Y., Rowe, J. H., Raymond, K., Stoddard, J., Delmonte, O. M., Bosticardo, M., Dobbs, K., Niemela, J., Calzoni, E., Pai, S.-Y., Choi, U., Yamazaki, Y., Comeau, A. M., Janssen, E., Henderson, L., Hazen, M., Berry, G., Rosenzweig, S. D., Aldhekri, H. H. and 3 moreVerheijen, J., Wong, S. Y., Rowe, J. H., Raymond, K., Stoddard, J., Delmonte, O. M., Bosticardo, M., Dobbs, K., Niemela, J., Calzoni, E., Pai, S.-Y., Choi, U., Yamazaki, Y., Comeau, A. M., Janssen, E., Henderson, L., Hazen, M., Berry, G., Rosenzweig, S. D., Aldhekri, H. H., He, M., Notarangelo, L. D., & Morava, E. (2020). Defining a new immune deficiency syndrome: MAN2B2-CDG. Journal of Allergy and Clinical Immunology, 145(3), 1008-1011. doi:10.1016/j.jaci.2019.11.016.
  • Verheijen, J., Tahata, S., Kozicz, T., Witters, P., & Morava, E. (2020). Therapeutic approaches in Congenital Disorders of Glycosylation (CDG) involving N-linked glycosylation: An update. Genetics in Medicine, 22(2), 268-279. doi:10.1038/s41436-019-0647-2.

    Abstract

    Congenital disorders of glycosylation (CDG) are a group of clinically and genetically heterogeneous metabolic disorders. Over 150 CDG types have been described. Most CDG types are ultrarare disorders. CDG types affecting N-glycosylation are the most common type of CDG with emerging therapeutic possibilities. This review is an update on the available therapies for disorders affecting the N-linked glycosylation pathway. In the first part of the review, we highlight the clinical presentation, general principles of management, and disease-specific therapies for N-linked glycosylation CDG types, organized by organ system. The second part of the review focuses on the therapeutic strategies currently available and under development. We summarize the successful (pre-) clinical application of nutritional therapies, transplantation, activated sugars, gene therapy, and pharmacological chaperones and outline the anticipated expansion of the therapeutic possibilities in CDG. We aim to provide a comprehensive update on the treatable aspects of CDG types involving N-linked glycosylation, with particular emphasis on disease-specific treatment options for the involved organ systems; call for natural history studies; and present current and future therapeutic strategies for CDG.
  • Vernes, S. C., & Wilkinson, G. S. (2020). Behaviour, biology, and evolution of vocal learning in bats. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375(1789): 20190061. doi:10.1098/rstb.2019.0061.

    Abstract

    The comparative approach can provide insight into the evolution of human speech, language and social communication by studying relevant traits in animal systems. Bats are emerging as a model system with great potential to shed light on these processes given their learned vocalizations, close social interactions, and mammalian brains and physiology. A recent framework outlined the multiple levels of investigation needed to understand vocal learning across a broad range of non-human species, including cetaceans, pinnipeds, elephants, birds and bats. Here, we apply this framework to the current state-of-the-art in bat research. This encompasses our understanding of the abilities bats have displayed for vocal learning, what is known about the timing and social structure needed for such learning, and current knowledge about the prevalence of the trait across the order. It also addresses the biology (vocal tract morphology, neurobiology and genetics) and evolution of this trait. We conclude by highlighting some key questions that should be answered to advance our understanding of the biological encoding and evolution of speech and spoken communication. This article is part of the theme issue 'What can animal communication teach us about human language?'

    Additional information

    earlier version of article on BioRxiv
  • Vogels, J., Howcroft, D. M., Tourtouri, E. N., & Demberg, V. (2020). How speakers adapt object descriptions to listeners under load. Language, Cognition and Neuroscience, 35(1), 78-92. doi:10.1080/23273798.2019.1648839.

    Abstract

    A controversial issue in psycholinguistics is the degree to which speakers employ audience design during language production. Hypothesising that a consideration of the listener’s needs is particularly relevant when the listener is under cognitive load, we had speakers describe objects for a listener performing an easy or a difficult simulated driving task. We predicted that speakers would introduce more redundancy in their descriptions in the difficult driving task, thereby accommodating the listener’s reduced cognitive capacity. The results showed that speakers did not adapt their descriptions to a change in the listener’s cognitive load. However, speakers who had experienced the driving task themselves before and who were presented with the difficult driving task first were more redundant than other speakers. These findings may suggest that speakers only consider the listener’s needs in the presence of strong enough cues, and do not update their beliefs about these needs during the task.
  • De Vos, J., Schriefers, H., & Lemhöfer, K. (2020). Does study language (Dutch versus English) influence study success of Dutch and German students in the Netherlands? Dutch Journal of Applied Linguistics, 9, 60-78. doi:10.1075/dujal.19008.dev.

    Abstract

    We investigated whether the language of instruction (Dutch or English) influenced the study success of 614 Dutch and German first-year psychology students in the Netherlands. The Dutch students who were instructed in Dutch studied in their native language (L1), the other students in a second language (L2). In addition, only the Dutch students studied in their home country. Both these variables could potentially influence study success, operationalised as the number of European Credits (ECs) the students obtained, their grades, and drop-out rates. The L1 group outperformed the three L2 groups with respect to grades, but there were no significant differences in ECs and drop-out rates (although descriptively, the L1 group still performed best). In conclusion, this study shows an advantage of studying in the L1 when it comes to grades, and thereby contributes to the current debate in the Dutch media regarding the desirability of offering degrees taught in English.
  • Waymel, A., Friedrich, P., Bastian, P.-A., Forkel, S. J., & Thiebaut de Schotten, M. (2020). Anchoring the human olfactory system within a functional gradient. NeuroImage, 216: 116863. doi:10.1016/j.neuroimage.2020.116863.

    Abstract

    Margulies et al. (2016) demonstrated the existence of at least five independent functional connectivity gradients in the human brain. However, it is unclear how these functional gradients might link to anatomy. The dual origin theory proposes that differences in cortical cytoarchitecture originate from two trends of progressive differentiation between the different layers of the cortex, referred to as the hippocampocentric and olfactocentric systems. When conceptualising the functional connectivity gradients within the evolutionary framework of the Dual Origin theory, the first gradient likely represents the hippocampocentric system anatomically. Here we expand on this concept and demonstrate that the fifth gradient likely links to the olfactocentric system. We describe the anatomy of the latter as well as the evidence to support this hypothesis. Together, the first and fifth gradients might help to model the Dual Origin theory of the human brain and inform brain models and pathologies.
  • Weissbart, H., Kandylaki, K. D., & Reichenbach, T. (2020). Cortical tracking of surprisal during continuous speech comprehension. Journal of Cognitive Neuroscience, 32, 155-166. doi:10.1162/jocn_a_01467.

    Abstract

    Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network and through relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band as well as in the higher frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
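    Surprisal, as the term is used above, is the negative log probability of a word given its preceding context; the study estimated these probabilities with a deep neural network. A minimal sketch of the quantity itself (the probability values below are illustrative, not model outputs):

```python
import math

def surprisal(prob):
    """Surprisal in bits: -log2 of a word's conditional probability."""
    return -math.log2(prob)

# A highly predictable word carries little surprisal...
low = surprisal(0.5)    # 1.0 bit
# ...while an unexpected word carries much more.
high = surprisal(0.01)  # ~6.64 bits
```

    In continuous speech, each successive word thus receives its own surprisal value, yielding a time series that can be regressed against the EEG signal.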
  • Whitaker, K., & Guest, O. (2020). #bropenscience is broken science: Kirstie Whitaker and Olivia Guest ask how open ‘open science’ really is. The Psychologist, 33, 34-37.
  • Willems, R. M., Nastase, S. A., & Milivojevic, B. (2020). Narratives for Neuroscience. Trends in Neurosciences, 43(5), 271-273. doi:10.1016/j.tins.2020.03.003.

    Abstract

    People organize and convey their thoughts according to narratives. However, neuroscientists are often reluctant to incorporate narrative stimuli into their experiments. We argue that narratives deserve wider adoption in human neuroscience because they tap into the brain’s native machinery for representing the world and provide rich variability for testing hypotheses.
  • Wilson, B., Spierings, M., Ravignani, A., Mueller, J. L., Mintz, T. H., Wijnen, F., Van der Kant, A., Smith, K., & Rey, A. (2020). Non‐adjacent dependency learning in humans and other animals. Topics in Cognitive Science, 12(3), 843-858. doi:10.1111/tops.12381.

    Abstract

    Learning and processing natural language requires the ability to track syntactic relationships between words and phrases in a sentence, which are often separated by intervening material. These nonadjacent dependencies can be studied using artificial grammar learning paradigms and structured sequence processing tasks. These approaches have been used to demonstrate that human adults, infants and some nonhuman animals are able to detect and learn dependencies between nonadjacent elements within a sequence. However, learning nonadjacent dependencies appears to be more cognitively demanding than detecting dependencies between adjacent elements, and only occurs in certain circumstances. In this review, we discuss different types of nonadjacent dependencies in language and in artificial grammar learning experiments, and how these differences might impact learning. We summarize different types of perceptual cues that facilitate learning, by highlighting the relationship between dependent elements bringing them closer together either physically, attentionally, or perceptually. Finally, we review artificial grammar learning experiments in human adults, infants, and nonhuman animals, and discuss how similarities and differences observed across these groups can provide insights into how language is learned across development and how these language‐related abilities might have evolved.
  • Wittenburg, P., Lautenschlager, M., Thiemann, H., Baldauf, C., & Trilsbeek, P. (2020). FAIR Practices in Europe. Data Intelligence, 2(1-2), 257-263. doi:10.1162/dint_a_00048.

    Abstract

    Institutions driving fundamental research at the cutting edge, such as those of the Max Planck Society (MPS), have taken steps to optimize data management and stewardship to be able to address new scientific questions. In this paper we selected three institutes from the MPS, from the areas of humanities, environmental sciences and natural sciences, as examples to indicate the efforts to integrate large amounts of data from collaborators worldwide and to create a data space that is ready to be exploited for new insights based on data-intensive science methods. For this integration the typical challenges of fragmentation, bad quality and also social differences had to be overcome. In all three cases, the core pillars were well-managed repositories driven by scientific needs and harmonization principles that had been agreed upon in the community. It is not surprising that these principles are very much aligned with what have now become the FAIR principles. The FAIR principles confirm the correctness of earlier decisions, and their clear formulation identified the gaps which the projects need to address.
  • Wnuk, E., Laophairoj, R., & Majid, A. (2020). Smell terms are not rara: A semantic investigation of odor vocabulary in Thai. Linguistics, 58(4), 937-966. doi:10.1515/ling-2020-0009.
  • Xiong, K., Verdonschot, R. G., & Tamaoka, K. (2020). The time course of brain activity in reading identical cognates: An ERP study of Chinese - Japanese bilinguals. Journal of Neurolinguistics, 55: 100911. doi:10.1016/j.jneuroling.2020.100911.

    Abstract

    Previous studies suggest that bilinguals' lexical access is language non-selective, especially for orthographically identical translation equivalents across languages (i.e., identical cognates). The present study investigated how such words (e.g., 学校, meaning "school" in both Chinese and Japanese) are processed in the (late) Chinese - Japanese bilingual brain. Using an L2-Japanese lexical decision task, both behavioral and electrophysiological data were collected. Reaction times (RTs), as well as the N400 component, showed that cognates are more easily recognized than non-cognates. Additionally, an early component (i.e., the N250), potentially reflecting activation at the word-form level, was also found. Cognates elicited a more positive N250 than non-cognates in the frontal region, indicating that the cognate facilitation effect occurred at an early stage of word-form processing for languages with logographic scripts.
  • Yang, W., Chan, A., Chang, F., & Kidd, E. (2020). Four-year-old Mandarin-speaking children’s online comprehension of relative clauses. Cognition, 196: 104103. doi:10.1016/j.cognition.2019.104103.

    Abstract

    A core question in language acquisition is whether children’s syntactic processing is experience-dependent and language-specific, or whether it is governed by abstract, universal syntactic machinery. We address this question by presenting corpus and on-line processing data from children learning Mandarin Chinese, a language that has been important in debates about the universality of parsing processes. The corpus data revealed that two different relative clause constructions in Mandarin are differentially used to modify syntactic subjects and objects. In the experiment, 4-year-old children’s eye-movements were recorded as they listened to the two RC construction types (e.g., Can you pick up the pig that pushed the sheep?). A permutation analysis showed that children’s ease of comprehension was closely aligned with the distributional frequencies, suggesting syntactic processing preferences are shaped by the input experience of these constructions.

    Additional information

    1-s2.0-S001002771930277X-mmc1.pdf
  • Yang, J., Cai, Q., & Tian, X. (2020). How do we segment text? Two-stage chunking operation in reading. eNeuro, 7(3): ENEURO.0425-19.2020. doi:10.1523/ENEURO.0425-19.2020.

    Abstract

    Chunking in language comprehension is a process that segments continuous linguistic input into smaller chunks that are in the reader’s mental lexicon. Effective chunking during reading facilitates disambiguation and enhances efficiency for comprehension. However, the chunking mechanisms remain elusive, especially in reading, given that information arrives simultaneously yet some writing systems, such as Chinese, lack explicit cues for marking boundaries. What are the mechanisms of chunking that mediate the reading of text containing hierarchical information? We investigated this question by manipulating the lexical status of the chunks at distinct levels in four-character Chinese strings, including the two-character local chunk and four-character global chunk. Male and female human participants were asked to make lexical decisions on these strings in a behavioral experiment, followed by a passive reading task during which their electroencephalography (EEG) was recorded. The behavioral results showed that the lexical decision time for lexicalized two-character local chunks was influenced by the lexical status of the four-character global chunk, but not vice versa, which indicated that the processing of global chunks took priority over the local chunks. The EEG results revealed that familiar lexical chunks were detected simultaneously at both levels and further processed in a different temporal order: the onset of lexical access for the global chunks was earlier than that of local chunks. These consistent results suggest a two-stage operation for chunking in reading: the simultaneous detection of familiar lexical chunks at multiple levels around 100 ms, followed by recognition of chunks with global precedence.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2020). The influence of orthography on speech production: Evidence from masked priming in word-naming and picture-naming tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(8), 1570-1589. doi:10.1037/xlm0000829.

    Abstract

    In a masked priming word-naming task, a facilitation due to the initial-segmental sound overlap for 2-character kanji prime-target pairs was affected by certain orthographic properties (Yoshihara, Nakayama, Verdonschot, & Hino, 2017). That is, the facilitation that was due to the initial mora overlap occurred only when the mora was the whole pronunciation of their initial kanji characters (i.e., match pairs; e.g., /ka-se.ki/-/ka-rjo.ku/). When the shared initial mora was only a part of the kanji characters' readings, however, there was no facilitation (i.e., mismatch pairs; e.g., /ha.tu-a.N/-/ha.ku-bu.tu/). In the present study, we used a masked priming picture-naming task to investigate whether the previous results were relevant only when the orthography of targets is visually presented. In Experiment 1, the main findings of our word-naming task were fully replicated in a picture-naming task. In Experiments 2 and 3, the absence of facilitation for the mismatch pairs was confirmed with a new set of stimuli. On the other hand, a significant facilitation was observed for the match pairs that shared the 2 initial morae (in Experiment 4), which was again consistent with the results of our word-naming study. These results suggest that the orthographic properties constrain the phonological expression of masked priming for kanji words across 2 tasks that are likely to differ in how phonology is retrieved. Specifically, we propose that the orthography of a word is activated online and constrains the phonological encoding processes in these tasks.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2020). Language selection contributes to intrusion errors in speaking: Evidence from picture naming. Bilingualism: Language and Cognition, 23, 788-800. doi:10.1017/S1366728919000683.

    Abstract

    Bilinguals usually select the right language to speak for the particular context they are in, but sometimes the nontarget language intrudes. Despite a large body of research into language selection and language control, it remains unclear where intrusion errors originate from. These errors may be due to incorrect selection of the nontarget language at the conceptual level, or be a consequence of erroneous word selection (despite correct language selection) at the lexical level. We examined the former possibility in two language switching experiments using a manipulation that supposedly affects language selection on the conceptual level, namely whether the conversational language context was associated with the target language (congruent) or with the alternative language (incongruent) on a trial. Both experiments showed that language intrusion errors occurred more often in incongruent than in congruent contexts, providing converging evidence that language selection during concept preparation is one driving force behind language intrusion.
  • Zheng, X., Roelofs, A., Erkan, H., & Lemhöfer, K. (2020). Dynamics of inhibitory control during bilingual speech production: An electrophysiological study. Neuropsychologia, 140: 107387. doi:10.1016/j.neuropsychologia.2020.107387.

    Abstract

    Bilingual speakers have to control their languages to avoid interference, which may be achieved by enhancing the target language and/or inhibiting the nontarget language. Previous research suggests that bilinguals use inhibition (e.g., Jackson et al., 2001), which should be reflected in the N2 component of the event-related potential (ERP) in the EEG. In the current study, we investigated the dynamics of inhibitory control by measuring the N2 during language switching and repetition in bilingual picture naming. Participants had to name pictures in Dutch or English depending on the cue. A run of same-language trials could be short (two or three trials) or long (five or six trials). We assessed whether RTs and N2 changed over the course of same-language runs, and at a switch between languages. Results showed that speakers named pictures more quickly late as compared to early in a run of same-language trials. Moreover, they made a language switch more quickly after a long run than after a short run. This run-length effect was only present in the first language (L1), not in the second language (L2). In ERPs, we observed a widely distributed switch effect in the N2, which was larger after a short run than after a long run. This effect was only present in the L2, not in the L1, although the difference was not significant between languages. In contrast, the N2 was not modulated during a same-language run. Our results suggest that the nontarget language is inhibited at a switch, but not during the repeated use of the target language.

