Publications

  • Monaghan, P., & Roberts, S. G. (2019). Cognitive influences in language evolution: Psycholinguistic predictors of loan word borrowing. Cognition, 186, 147-158. doi:10.1016/j.cognition.2019.02.007.

    Abstract

    Languages change due to social, cultural, and cognitive influences. In this paper, we provide an assessment of these cognitive influences on diachronic change in the vocabulary. Previously, tests of stability and change of vocabulary items have been conducted on small sets of words where diachronic change is imputed from cladistics studies. Here, we show for a substantially larger set of words that stability and change in terms of documented borrowings of words into English and into Dutch can be predicted by psycholinguistic properties of words that reflect their representational fidelity. We found that grammatical category, word length, age of acquisition, and frequency predict borrowing rates, but frequency has a non-linear relationship. Frequency correlates negatively with probability of borrowing for high-frequency words, but positively for low-frequency words. This borrowing evidence documents recent, observable diachronic change in the vocabulary enabling us to distinguish between change associated with transmission during language acquisition and change due to innovations by proficient speakers.
  • Mongelli, V., Meijs, E. L., Van Gaal, S., & Hagoort, P. (2019). No language unification without neural feedback: How awareness affects sentence processing. Neuroimage, 202: 116063. doi:10.1016/j.neuroimage.2019.116063.

    Abstract

    How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing however may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
  • Montero-Melis, G., & Jaeger, T. F. (2020). Changing expectations mediate adaptation in L2 production. Bilingualism: Language and Cognition, 23(3), 602-617. doi:10.1017/S1366728919000506.

    Abstract

    Native language (L1) processing draws on implicit expectations. An open question is whether non-native learners of a second language (L2) similarly draw on expectations, and whether these expectations are based on learners’ L1 or L2 knowledge. We approach this question by studying inverse preference effects on lexical encoding. L1 and L2 speakers of Spanish described motion events, while they were either primed to express path, manner, or neither. In line with other work, we find that L1 speakers adapted more strongly after primes that are unexpected in their L1. For L2 speakers, adaptation depended on their L2 proficiency: The least proficient speakers exhibited the inverse preference effect on adaptation based on what was unexpected in their L1; but the more proficient speakers were, the more they exhibited inverse preference effects based on what was unexpected in the L2. We discuss implications for L1 transfer and L2 acquisition.
  • Montero-Melis, G., Isaksson, P., Van Paridon, J., & Ostarek, M. (2020). Does using a foreign language reduce mental imagery? Cognition, 196: 104134. doi:10.1016/j.cognition.2019.104134.

    Abstract

    In a recent article, Hayakawa and Keysar (2018) propose that mental imagery is less vivid when evoked in a foreign than in a native language. The authors argue that reduced mental imagery could even account for moral foreign language effects, whereby moral choices become more utilitarian when made in a foreign language. Here we demonstrate that Hayakawa and Keysar's (2018) key results are better explained by reduced language comprehension in a foreign language than by less vivid imagery. We argue that the paradigm used in Hayakawa and Keysar (2018) does not provide a satisfactory test of reduced imagery and we discuss an alternative paradigm based on recent experimental developments.

    Additional information

    Supplementary data and scripts
  • Morgan, T. J. H., Acerbi, A., & Van Leeuwen, E. J. C. (2019). Copy-the-majority of instances or individuals? Two approaches to the majority and their consequences for conformist decision-making. PLoS One, 14(1): e0210748. doi:10.1371/journal.pone.0210748.

    Abstract

    Cultural evolution is the product of the psychological mechanisms that underlie individual decision making. One commonly studied learning mechanism is a disproportionate preference for majority opinions, known as conformist transmission. While most theoretical and experimental work approaches the majority in terms of the number of individuals that perform a behaviour or hold a belief, some recent experimental studies approach the majority in terms of the number of instances a behaviour is performed. Here, we use a mathematical model to show that disagreement between these two notions of the majority can arise when behavioural variants are performed at different rates, with different salience or in different contexts (variant overrepresentation) and when a subset of the population act as demonstrators to the whole population (model biases). We also show that because conformist transmission changes the distribution of behaviours in a population, how observers approach the majority can cause populations to diverge, and that this can happen even when the two approaches to the majority agree with regards to which behaviour is in the majority. We discuss these results in light of existing findings, ranging from political extremism on Twitter to studies of animal foraging behaviour. We conclude that the factors we considered (variant overrepresentation and model biases) are plausibly widespread. As such, it is important to understand how individuals approach the majority in order to understand the effects of majority influence in cultural evolution.
  • Mudd, K., Lutzenberger, H., De Vos, C., Fikkert, P., Crasborn, O., & De Boer, B. (2020). The effect of sociolinguistic factors on variation in the Kata Kolok lexicon. Asia-Pacific Language Variation, 6(1), 53-88. doi:10.1075/aplv.19009.mud.

    Abstract

    Sign languages can be categorized as shared sign languages or deaf community sign languages, depending on the context in which they emerge. It has been suggested that shared sign languages exhibit more variation in the expression of everyday concepts than deaf community sign languages (Meir, Israel, Sandler, Padden, & Aronoff, 2012). For deaf community sign languages, it has been shown that various sociolinguistic factors condition this variation. This study presents one of the first in-depth investigations of how sociolinguistic factors (deaf status, age, clan, gender and having a deaf family member) affect lexical variation in a shared sign language, using a picture description task in Kata Kolok. To study lexical variation in Kata Kolok, two methodologies are devised: the identification of signs by underlying iconic motivation and mapping, and a way to compare individual repertoires of signs by calculating the lexical distances between participants. Alongside presenting novel methodologies to study this type of sign language, we present preliminary evidence of sociolinguistic factors that may influence variation in the Kata Kolok lexicon.
  • Muhinyi, A., Hesketh, A., Stewart, A. J., & Rowland, C. F. (2020). Story choice matters for caregiver extra-textual talk during shared reading with preschoolers. Journal of Child Language, 47(3), 633-654. doi:10.1017/S0305000919000783.

    Abstract

    This study aimed to examine the influence of the complexity of the story-book on caregiver extra-textual talk (i.e., interactions beyond text reading) during shared reading with preschool-age children. Fifty-three mother–child dyads (3;00–4;11) were video-recorded sharing two ostensibly similar picture-books: a simple story (containing no false belief) and a complex story (containing a false belief central to the plot, which provided content that was more challenging for preschoolers to understand). Book-reading interactions were transcribed and coded. Results showed that the complex stories facilitated more extra-textual talk from mothers, and a higher quality of extra-textual talk (as indexed by linguistic richness and level of abstraction). Although the type of story did not affect the number of questions mothers posed, more elaborative follow-ups on children's responses were provided by mothers when sharing complex stories. Complex stories may facilitate more and linguistically richer caregiver extra-textual talk, having implications for preschoolers’ developing language abilities.
  • Nakamoto, T., Suei, Y., Konishi, M., Kanda, T., Verdonschot, R. G., & Kakimoto, N. (2019). Abnormal positioning of the common carotid artery clinically diagnosed as a submandibular mass. Oral Radiology, 35(3), 331-334. doi:10.1007/s11282-018-0355-7.

    Abstract

    The common carotid artery (CCA) usually runs along the long axis of the neck, although it is occasionally found in an abnormal position or is displaced. We report a case of an 86-year-old woman in whom the CCA was identified in the submandibular area. The patient visited our clinic and reported soft tissue swelling in the right submandibular area. It resembled a tumor mass or a swollen lymph node. Computed tomography showed that it was the right CCA that had been bent forward and was running along the submandibular subcutaneous area. Ultrasonography verified the diagnosis. No other lesions were found on the diagnostic images. Consequently, the patient was diagnosed as having abnormal CCA positioning. Although this condition generally requires no treatment, it is important to follow up the abnormality with diagnostic imaging because of the risk of cerebrovascular disorders.
  • Nakamoto, T., Hatsuta, S., Yagi, S., Verdonschot, R. G., Taguchi, A., & Kakimoto, N. (2020). Computer-aided diagnosis system for osteoporosis based on quantitative evaluation of mandibular lower border porosity using panoramic radiographs. Dentomaxillofacial Radiology, 49(4): 20190481. doi:10.1259/dmfr.20190481.

    Abstract

    Objectives: A new computer-aided screening system for osteoporosis using panoramic radiographs was developed. The conventional system could detect porotic changes within the lower border of the mandible, but its severity could not be evaluated. Our aim was to enable the system to measure severity by implementing a linear bone resorption severity index (BRSI) based on the cortical bone shape.
    Methods: The participants were 68 females (>50 years) who underwent panoramic radiography and lumbar spine bone density measurements. The new system was designed to extract the lower border of the mandible as regions of interest and convert them into morphological skeleton line images. The total perimeter length of the skeleton lines was defined as the BRSI. Forty images were visually evaluated for the presence of cortical bone porosity. The correlation between visual evaluation and the BRSI of the participants, and the optimal BRSI threshold value for the new system, were investigated through receiver operating characteristic analysis. The diagnostic performance of the new system was evaluated by comparing its results with lumbar bone density tests in 28 participants.
    Results: BRSI and lumbar bone density showed a strong negative correlation (p < 0.01). BRSI showed a strong correlation with visual evaluation. The new system showed high diagnostic efficacy with sensitivity of 90.9%, specificity of 64.7%, and accuracy of 75.0%.
    Conclusions: The new screening system is able to quantitatively evaluate mandibular cortical porosity. This allows for preventive screening for osteoporosis thereby enhancing clinical prospects.
  • Nakamoto, T., Taguchi, A., Verdonschot, R. G., & Kakimoto, N. (2019). Improvement of region of interest extraction and scanning method of computer-aided diagnosis system for osteoporosis using panoramic radiographs. Oral Radiology, 35(2), 143-151. doi:10.1007/s11282-018-0330-3.

    Abstract

    Objectives: Patients undergoing osteoporosis treatment benefit greatly from early detection. We previously developed a computer-aided diagnosis (CAD) system to identify osteoporosis using panoramic radiographs. However, the region of interest (ROI) was relatively small, and the method to select suitable ROIs was labor-intensive. This study aimed to expand the ROI and perform semi-automatized extraction of ROIs. The diagnostic performance and operating time were also assessed.
    Methods: We used panoramic radiographs and skeletal bone mineral density data of 200 postmenopausal women. Using the reference point that we defined by averaging 100 panoramic images as the lower mandibular border under the mental foramen, a 400x100-pixel ROI was automatically extracted and divided into four 100x100-pixel blocks. Valid blocks were analyzed using program 1, which examined each block separately, and program 2, which divided the blocks into smaller segments and performed scans/analyses across blocks. Diagnostic performance was evaluated using another set of 100 panoramic images.
    Results: Most ROIs (97.0%) were correctly extracted. The operation time decreased to 51.4% for program 1 and to 69.3% for program 2. The sensitivity, specificity, and accuracy for identifying osteoporosis were 84.0, 68.0, and 72.0% for program 1 and 92.0, 62.7, and 70.0% for program 2, respectively. Compared with the previous conventional system, program 2 recorded a slightly higher sensitivity, although it occasionally also elicited false positives.
    Conclusions: Patients at risk for osteoporosis can be identified more rapidly using this new CAD system, which may contribute to earlier detection and intervention and improved medical care.
  • Nayernia, L., Van den Vijver, R., & Indefrey, P. (2019). The influence of orthography on phonemic knowledge: An experimental investigation on German and Persian. Journal of Psycholinguistic Research, 48(6), 1391-1406. doi:10.1007/s10936-019-09664-9.

    Abstract

    This study investigated whether the phonological representation of a word is modulated by its orthographic representation in case of a mismatch between the two representations. Such a mismatch is found in Persian, where short vowels are represented phonemically but not orthographically. Persian adult literates, Persian adult illiterates, and German adult literates were presented with two auditory tasks, an AX-discrimination task and a reversal task. We assumed that if orthographic representations influence phonological representations, Persian literates should perform worse than Persian illiterates or German literates on items with short vowels in these tasks. The results of the discrimination tasks showed that Persian literates and illiterates as well as German literates were approximately equally competent in discriminating short vowels in Persian words and pseudowords. Persian literates did not discriminate well between German words containing phonemes that differed only in vowel length. German literates performed relatively poorly in discriminating German homographic words that differed only in vowel length. Persian illiterates were unable to perform the reversal task in Persian. The results of the other two participant groups in the reversal task showed the predicted poorer performance of Persian literates on Persian items containing short vowels compared to items containing long vowels only. German literates did not show this effect in German. Our results suggest two distinct effects of orthography on phonemic representations: whereas the lack of orthographic representations seems to affect phonemic awareness, homography seems to affect the discriminability of phonemic representations.
  • Nazzi, T., & Cutler, A. (2019). How consonants and vowels shape spoken-language recognition. Annual Review of Linguistics, 5, 25-47. doi:10.1146/annurev-linguistics-011718-011919.

    Abstract

    All languages instantiate a consonant/vowel contrast. This contrast has processing consequences at different levels of spoken-language recognition throughout the lifespan. In adulthood, lexical processing is more strongly associated with consonant than with vowel processing; this has been demonstrated across 13 languages from seven language families and in a variety of auditory lexical-level tasks (deciding whether a spoken input is a word, spotting a real word embedded in a minimal context, reconstructing a word minimally altered into a pseudoword, learning new words or the “words” of a made-up language), as well as in written-word tasks involving phonological processing. In infancy, a consonant advantage in word learning and recognition is found to emerge during development in some languages, though possibly not in others, revealing that the stronger lexicon–consonant association found in adulthood is learned. Current research is evaluating the relative contribution of the early acquisition of the acoustic/phonetic and lexical properties of the native language in the emergence of this association.
  • Need, A. C., Ge, D., Weale, M. E., Maia, J., Feng, S., Heinzen, E. L., Shianna, K. V., Yoon, W., Kasperavičiūtė, D., Gennarelli, M., Strittmatter, W. J., Bonvicini, C., Rossi, G., Jayathilake, K., Cola, P. A., McEvoy, J. P., Keefe, R. S. E., Fisher, E. M. C., St. Jean, P. L., Giegling, I., Hartmann, A. M., Möller, H.-J., Ruppert, A., Fraser, G., Crombie, C., Middleton, L. T., St. Clair, D., Roses, A. D., Muglia, P., Francks, C., Rujescu, D., Meltzer, H. Y., & Goldstein, D. B. (2009). A genome-wide investigation of SNPs and CNVs in schizophrenia. PLoS Genetics, 5(2), e1000373. doi:10.1371/journal.pgen.1000373.

    Abstract

    We report a genome-wide assessment of single nucleotide polymorphisms (SNPs) and copy number variants (CNVs) in schizophrenia. We investigated SNPs using 871 patients and 863 controls, following up the top hits in four independent cohorts comprising 1,460 patients and 12,995 controls, all of European origin. We found no genome-wide significant associations, nor could we provide support for any previously reported candidate gene or genome-wide associations. We went on to examine CNVs using a subset of 1,013 cases and 1,084 controls of European ancestry, and a further set of 60 cases and 64 controls of African ancestry. We found that eight cases and zero controls carried deletions greater than 2 Mb, of which two, at 8p22 and 16p13.11-p12.4, are newly reported here. A further evaluation of 1,378 controls identified no deletions greater than 2 Mb, suggesting a high prior probability of disease involvement when such deletions are observed in cases. We also provide further evidence for some smaller, previously reported, schizophrenia-associated CNVs, such as those in NRXN1 and APBA2. We could not provide strong support for the hypothesis that schizophrenia patients have a significantly greater “load” of large (>100 kb), rare CNVs, nor could we find common CNVs that associate with schizophrenia. Finally, we did not provide support for the suggestion that schizophrenia-associated CNVs may preferentially disrupt genes in neurodevelopmental pathways. Collectively, these analyses provide the first integrated study of SNPs and CNVs in schizophrenia and support the emerging view that rare deleterious variants may be more important in schizophrenia predisposition than common polymorphisms. While our analyses do not suggest that implicated CNVs impinge on particular key pathways, we do support the contribution of specific genomic regions in schizophrenia, presumably due to recurrent mutation. On balance, these data suggest that very few schizophrenia patients share identical genomic causation, potentially complicating efforts to personalize treatment regimens.
  • Newbury, D. F., Winchester, L., Addis, L., Paracchini, S., Buckingham, L.-L., Clark, A., Cohen, W., Cowie, H., Dworzynski, K., Everitt, A., Goodyer, I. M., Hennessy, E., Kindley, A. D., Miller, L. L., Nasir, J., O'Hare, A., Shaw, D., Simkin, Z., Simonoff, E., Slonims, V., Watson, J., Ragoussis, J., Fisher, S. E., Seckl, J. R., Helms, P. J., Bolton, P. F., Pickles, A., Conti-Ramsden, G., Baird, G., Bishop, D. V., & Monaco, A. P. (2009). CMIP and ATP2C2 modulate phonological short-term memory in language impairment. American Journal of Human Genetics, 85(2), 264-272. doi:10.1016/j.ajhg.2009.07.004.

    Abstract

    Specific language impairment (SLI) is a common developmental disorder characterized by difficulties in language acquisition despite otherwise normal development and in the absence of any obvious explanatory factors. We performed a high-density screen of SLI1, a region of chromosome 16q that shows highly significant and consistent linkage to nonword repetition, a measure of phonological short-term memory that is commonly impaired in SLI. Using two independent language-impaired samples, one family-based (211 families) and another selected from a population cohort on the basis of extreme language measures (490 cases), we detected association to two genes in the SLI1 region: that encoding c-maf-inducing protein (CMIP, minP = 5.5 × 10−7 at rs6564903) and that encoding calcium-transporting ATPase, type2C, member2 (ATP2C2, minP = 2.0 × 10−5 at rs11860694). Regression modeling indicated that each of these loci exerts an independent effect upon nonword repetition ability. Despite the consistent findings in language-impaired samples, investigation in a large unselected cohort (n = 3612) did not detect association. We therefore propose that variants in CMIP and ATP2C2 act to modulate phonological short-term memory primarily in the context of language impairment. As such, this investigation supports the hypothesis that some causes of language impairment are distinct from factors that influence normal language variation. This work therefore implicates CMIP and ATP2C2 in the etiology of SLI and provides molecular evidence for the importance of phonological short-term memory in language acquisition.

    Additional information

    mmc1.pdf
  • Newman-Norlund, S. E., Noordzij, M. L., Newman-Norlund, R. D., Volman, I. A., De Ruiter, J. P., Hagoort, P., & Toni, I. (2009). Recipient design in tacit communication. Cognition, 111, 46-54. doi:10.1016/j.cognition.2008.12.004.

    Abstract

    The ability to design tailored messages for specific listeners is an important aspect of human communication. The present study investigates whether a mere belief about an addressee’s identity influences the generation and production of a communicative message in a novel, non-verbal communication task. Participants were made to believe they were playing a game with a child or an adult partner, while a confederate acted as both child and adult partners with matched performance and response times. The participants’ belief influenced their behavior: they spent longer when interacting with the presumed child addressee, but only during communicative portions of the game, i.e. using time as a tool to place emphasis on target information. This communicative adaptation attenuated with experience, and it was related to personality traits, namely Empathy and Need for Cognition measures. Overall, these findings indicate that novel nonverbal communicative interactions are selected according to a socio-centric perspective, and they are strongly influenced by participants’ traits.
  • Niemi, J., Laine, M., & Järvikivi, J. (2009). Paradigmatic and extraparadigmatic morphology in the mental lexicon: Experimental evidence for a dissociation. The mental lexicon, 4(1), 26-40. doi:10.1075/ml.4.1.02nie.

    Abstract

    The present study discusses psycholinguistic evidence for a difference between paradigmatic and extraparadigmatic morphology by investigating the processing of Finnish inflected and cliticized words. The data are derived from three sources of Finnish: from single-word reading performance in an agrammatic deep dyslectic speaker, as well as from visual lexical decision and wordness/learnability ratings of cliticized vs. inflected items by normal Finnish speakers. The agrammatic speaker showed awareness of the suffixes in multimorphemic words, including clitics, since he attempted to fill in this slot with morphological material. However, he never produced a clitic — either as the correct response or as an error — in any morphological configuration (simplex, derived, inflected, compound). Moreover, he produced more nominative singular errors for case-inflected nouns than he did for the cliticized words, a pattern that is expected if case-inflected forms were closely associated with their lexical heads, i.e., if they were paradigmatic and cliticized words were not. Furthermore, a visual lexical decision task with normal speakers of Finnish showed an additional processing cost (longer latencies and more errors on cliticized than on case-inflected noun forms). Finally, a rating task indicated no difference in relative wordness between these two types of words. However, the same cliticized words were judged harder to learn as L2 items than the inflected words, most probably due to their conceptual/semantic properties, in other words due to their lack of word-level translation equivalents in SAE (Standard Average European) languages. Taken together, the present results suggest that the distinction between paradigmatic and extraparadigmatic morphology is psychologically real.
  • Niermann, H. C. M., Tyborowska, A., Cillessen, A. H. N., Van Donkelaar, M. M. J., Lammertink, F., Gunnar, M. R., Franke, B., Figner, B., & Roelofs, K. (2019). The relation between infant freezing and the development of internalizing symptoms in adolescence: A prospective longitudinal study. Developmental Science, 22(3): e12763. doi:10.1111/desc.12763.

    Abstract

    Given the long-lasting detrimental effects of internalizing symptoms, there is great need for detecting early risk markers. One promising marker is freezing behavior. Whereas initial freezing reactions are essential for coping with threat, prolonged freezing has been associated with internalizing psychopathology. However, it remains unknown whether early life alterations in freezing reactions predict changes in internalizing symptoms during adolescent development. In a longitudinal study (N = 116), we tested prospectively whether observed freezing in infancy predicted the development of internalizing symptoms from childhood through late adolescence (until age 17). Both longer and absent infant freezing behavior during a standard challenge (robot-confrontation task) were associated with internalizing symptoms in adolescence. Specifically, absent infant freezing predicted a relative increase in internalizing symptoms consistently across development from relatively low symptom levels in childhood to relatively high levels in late adolescence. Longer infant freezing also predicted a relative increase in internalizing symptoms, but only up until early adolescence. This latter effect was moderated by peer stress and was followed by a later decrease in internalizing symptoms. The findings suggest that early deviations in defensive freezing responses signal risk for internalizing symptoms and may constitute important markers in future stress vulnerability and resilience studies.
  • Nieuwland, M. S., Coopmans, C. W., & Sommers, R. P. (2019). Distinguishing old from new referents during discourse comprehension: Evidence from ERPs and oscillations. Frontiers in Human Neuroscience, 13: 398. doi:10.3389/fnhum.2019.00398.

    Abstract

    In this EEG study, we used pre-registered and exploratory ERP and time-frequency analyses to investigate the resolution of anaphoric and non-anaphoric noun phrases during discourse comprehension. Participants listened to story contexts that described two antecedents, and subsequently read a target sentence with a critical noun phrase that lexically matched one antecedent (‘old’), matched two antecedents (‘ambiguous’), partially matched one antecedent in terms of semantic features (‘partial-match’), or introduced another referent (non-anaphoric, ‘new’). After each target sentence, participants judged whether the noun referred back to an antecedent (i.e., an ‘old/new’ judgment), which was easiest for ambiguous nouns and hardest for partially matching nouns. The noun-elicited N400 ERP component demonstrated initial sensitivity to repetition and semantic overlap, corresponding to repetition and semantic priming effects, respectively. New and partially matching nouns both elicited a subsequent frontal positivity, which suggested that partially matching anaphors may have been processed as new nouns temporarily. ERPs in an even later time window and ERPs time-locked to sentence-final words suggested that new and partially matching nouns had different effects on comprehension, with partially matching nouns incurring additional processing costs up to the end of the sentence. In contrast to the ERP results, the time-frequency results primarily demonstrated sensitivity to noun repetition, and did not differentiate partially matching anaphors from new nouns. In sum, our results show the ERP and time-frequency effects of referent repetition during discourse comprehension, and demonstrate the potentially demanding nature of establishing the anaphoric meaning of a novel noun.
  • Nieuwland, M. S. (2019). Do ‘early’ brain responses reveal word form prediction during language comprehension? A critical review. Neuroscience and Biobehavioral Reviews, 96, 367-400. doi:10.1016/j.neubiorev.2018.11.019.

    Abstract

    Current theories of language comprehension posit that readers and listeners routinely try to predict the meaning but also the visual or sound form of upcoming words. Whereas most neuroimaging studies on word prediction focus on the N400 ERP or its magnetic equivalent, various studies claim that word form prediction manifests itself in ‘early’, pre-N400 brain responses (e.g., ELAN, M100, P130, N1, P2, N200/PMN, N250). Modulations of these components are often taken as evidence that word form prediction impacts early sensory processes (the sensory hypothesis) or, alternatively, the initial stages of word recognition before word meaning is integrated with sentence context (the recognition hypothesis). Here, I comprehensively review studies on sentence- or discourse-level language comprehension that report such effects of prediction on early brain responses. I conclude that the reported evidence for the sensory hypothesis or word recognition hypothesis is weak and inconsistent, and highlight the urgent need for replication of previous findings. I discuss the implications and challenges to current theories of linguistic prediction and suggest avenues for future research.
  • Nieuwland, M. S., Arkhipova, Y., & Rodríguez-Gómez, P. (2020). Anticipating words during spoken discourse comprehension: A large-scale, pre-registered replication study using brain potentials. Cortex, 133, 1-36. doi:10.1016/j.cortex.2020.09.007.

    Abstract

    Numerous studies report brain potential evidence for the anticipation of specific words during language comprehension. In the most convincing demonstrations, highly predictable nouns exert an influence on processing even before they appear to a reader or listener, as indicated by the brain's neural response to a prenominal adjective or article when it mismatches the expectations about the upcoming noun. However, recent studies suggest that some well-known demonstrations of prediction may be hard to replicate. This could signal the use of data-contingent analysis, but might also mean that readers and listeners do not always use prediction-relevant information in the way that psycholinguistic theories typically suggest. To shed light on this issue, we performed a close replication of one of the best-cited ERP studies on word anticipation (Van Berkum, Brown, Zwitserlood, Kooijman & Hagoort, 2005; Experiment 1), in which participants listened to Dutch spoken mini-stories. In the original study, the marking of grammatical gender on pre-nominal adjectives (‘groot/grote’) elicited an early positivity when mismatching the gender of an unseen, highly predictable noun, compared to matching gender. The current pre-registered study involved that same manipulation, but used a novel set of materials twice the size of the original set, an increased sample size (N = 187), and Bayesian mixed-effects model analyses that better accounted for known sources of variance than the original. In our study, mismatching gender elicited more negative voltage than matching gender at posterior electrodes. However, this N400-like effect was small in size and lacked support from Bayes Factors. In contrast, we successfully replicated the original's noun effects. While our results yielded some support for prediction, they do not support the Van Berkum et al. effect and highlight the risks associated with commonly employed data-contingent analyses and small sample sizes. Our results also raise the question whether Dutch listeners reliably or consistently use adjectival inflection information to inform their noun predictions.
  • Nieuwland, M. S., Barr, D. J., Bartolozzi, F., Busch-Moreno, S., Darley, E., Donaldson, D. I., Ferguson, H. J., Fu, X., Heyselaar, E., Huettig, F., Husband, E. M., Ito, A., Kazanina, N., Kogan, V., Kohút, Z., Kulakova, E., Mézière, D., Politzer-Ahles, S., Rousselet, G., Rueschemeyer, S.-A., Segaert, K., Tuomainen, J., & Von Grebmer Zu Wolfsthurn, S. (2020). Dissociable effects of prediction and integration during language comprehension: Evidence from a large-scale study using brain potentials. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 375: 20180522. doi:10.1098/rstb.2018.0522.

    Abstract

    Composing sentence meaning is easier for predictable words than for unpredictable words. Are predictable words genuinely predicted, or simply more plausible and therefore easier to integrate with sentence context? We addressed this persistent and fundamental question using data from a recent, large-scale (N = 334) replication study, by investigating the effects of word predictability and sentence plausibility on the N400, the brain’s electrophysiological index of semantic processing. A spatiotemporally fine-grained mixed-effects multiple regression analysis revealed overlapping effects of predictability and plausibility on the N400, albeit with distinct spatiotemporal profiles. Our results challenge the view that the predictability-dependent N400 reflects the effects of either prediction or integration, and suggest that semantic facilitation of predictable words arises from a cascade of processes that activate and integrate word meaning with context into a sentence-level meaning.
  • Nieuwland, M. S., & Kazanina, N. (2020). The neural basis of linguistic prediction: Introduction to the special issue. Neuropsychologia, 146: 107532. doi:10.1016/j.neuropsychologia.2020.107532.
  • Nievergelt, C. M., Maihofer, A. X., Klengel, T., Atkinson, E. G., Chen, C.-Y., Choi, K. W., Coleman, J. R. I., Dalvie, S., Duncan, L. E., Gelernter, J., Levey, D. F., Logue, M. W., Polimanti, R., Provost, A. C., Ratanatharathorn, A., Stein, M. B., Torres, K., Aiello, A. E., Almli, L. M., Amstadter, A. B., Andersen, S. B., Andreassen, O. A., Arbisi, P. A., Ashley-Koch, A. E., Austin, S. B., Avdibegovic, E., Babić, D., Bækvad-Hansen, M., Baker, D. G., Beckham, J. C., Bierut, L. J., Bisson, J. I., Boks, M. P., Bolger, E. A., Børglum, A. D., Bradley, B., Brashear, M., Breen, G., Bryant, R. A., Bustamante, A. C., Bybjerg-Grauholm, J., Calabrese, J. R., Caldas-de-Almeida, J. M., Dale, A. M., Daly, M. J., Daskalakis, N. P., Deckert, J., Delahanty, D. L., Dennis, M. F., Disner, S. G., Domschke, K., Dzubur-Kulenovic, A., Erbes, C. R., Evans, A., Farrer, L. A., Feeny, N. C., Flory, J. D., Forbes, D., Franz, C. E., Galea, S., Garrett, M. E., Gelaye, B., Geuze, E., Gillespie, C., Uka, A. G., Gordon, S. D., Guffanti, G., Hammamieh, R., Harnal, S., Hauser, M. A., Heath, A. C., Hemmings, S. M. J., Hougaard, D. M., Jakovljevic, M., Jett, M., Johnson, E. O., Jones, I., Jovanovic, T., Qin, X.-J., Junglen, A. G., Karstoft, K.-I., Kaufman, M. L., Kessler, R. C., Khan, A., Kimbrel, N. A., King, A. P., Koen, N., Kranzler, H. R., Kremen, W. S., Lawford, B. R., Lebois, L. A. M., Lewis, C. E., Linnstaedt, S. D., Lori, A., Lugonja, B., Luykx, J. J., Lyons, M. J., Maples-Keller, J., Marmar, C., Martin, A. R., Martin, N. G., Maurer, D., Mavissakalian, M. R., McFarlane, A., McGlinchey, R. E., McLaughlin, K. A., McLean, S. A., McLeay, S., Mehta, D., Milberg, W. P., Miller, M. W., Morey, R. A., Morris, C. P., Mors, O., Mortensen, P. B., Neale, B. M., Nelson, E. C., Nordentoft, M., Norman, S. B., O’Donnell, M., Orcutt, H. K., Panizzon, M. S., Peters, E. S., Peterson, A. L., Peverill, M., Pietrzak, R. H., Polusny, M. A., Rice, J. P., Ripke, S., Risbrough, V. B., Roberts, A. L., Rothbaum, A. O., Rothbaum, B. O., Roy-Byrne, P., Ruggiero, K., Rung, A., Rutten, B. P. F., Saccone, N. L., Sanchez, S. E., Schijven, D., Seedat, S., Seligowski, A. V., Seng, J. S., Sheerin, C. M., Silove, D., Smith, A. K., Smoller, J. W., Sponheim, S. R., Stein, D. J., Stevens, J. S., Sumner, J. A., Teicher, M. H., Thompson, W. K., Trapido, E., Uddin, M., Ursano, R. J., van den Heuvel, L. L., Van Hooff, M., Vermetten, E., Vinkers, C. H., Voisey, J., Wang, Y., Wang, Z., Werge, T., Williams, M. A., Williamson, D. E., Winternitz, S., Wolf, C., Wolf, E. J., Wolff, J. D., Yehuda, R., Young, R. M., Young, K. A., Zhao, H., Zoellner, L. A., Liberzon, I., Ressler, K. J., Haas, M., & Koenen, K. C. (2019). International meta-analysis of PTSD genome-wide association studies identifies sex- and ancestry-specific genetic risk loci. Nature Communications, 10(1): 4558. doi:10.1038/s41467-019-12576-w.

    Abstract

    The risk of posttraumatic stress disorder (PTSD) following trauma is heritable, but robust common variants have yet to be identified. In a multi-ethnic cohort including over 30,000 PTSD cases and 170,000 controls we conduct a genome-wide association study of PTSD. We demonstrate SNP-based heritability estimates of 5–20%, varying by sex. Three genome-wide significant loci are identified, 2 in European and 1 in African-ancestry analyses. Analyses stratified by sex implicate 3 additional loci in men. Along with other novel genes and non-coding RNAs, a Parkinson’s disease gene involved in dopamine regulation, PARK2, is associated with PTSD. Finally, we demonstrate that polygenic risk for PTSD is significantly predictive of re-experiencing symptoms in the Million Veteran Program dataset, although specific loci did not replicate. These results demonstrate the role of genetic variation in the biology of risk for PTSD and highlight the necessity of conducting sex-stratified analyses and expanding GWAS beyond European ancestry populations.

    Additional information

    Supplementary information
  • Nijland, L., & Janse, E. (Eds.). (2009). Auditory processing in speakers with acquired or developmental language disorders [Special Issue]. Clinical Linguistics and Phonetics, 23(3).
  • Noble, C., Cameron-Faulkner, T., Jessop, A., Coates, A., Sawyer, H., Taylor-Ims, R., & Rowland, C. F. (2020). The impact of interactive shared book reading on children's language skills: A randomized controlled trial. Journal of Speech, Language, and Hearing Research, 63(6), 1878-1897. doi:10.1044/2020_JSLHR-19-00288.

    Abstract

    Purpose: Research has indicated that interactive shared book reading can support a wide range of early language skills and that children who are read to regularly in the early years learn language faster, enter school with a larger vocabulary, and become more successful readers at school. Despite the large volume of research suggesting interactive shared reading is beneficial for language development, two fundamental issues remain outstanding: whether shared book reading interventions are equally effective (a) for children from all socioeconomic backgrounds and (b) for a range of language skills.
    Method: To address these issues, we conducted a randomized controlled trial to investigate the effects of two 6-week interactive shared reading interventions on a range of language skills in children across the socioeconomic spectrum. One hundred and fifty children aged between 2;6 and 3;0 (years;months) were randomly assigned to one of three conditions: a pause reading, a dialogic reading, or an active shared reading control condition.
    Results: The findings indicated that the interventions were effective at changing caregiver reading behaviors. However, the interventions did not boost children’s language skills over and above the effect of an active reading control condition. There were also no effects of socioeconomic status.
    Conclusion: This randomized controlled trial showed that caregivers from all socioeconomic backgrounds successfully adopted an interactive shared reading style. However, while the interventions were effective at increasing caregivers’ use of interactive shared book reading behaviors, this did not have a significant impact on the children’s language skills. The findings are discussed in terms of practical implications and future research.

    Additional information

    Supplemental Material
  • Noble, C., Sala, G., Peter, M., Lingwood, J., Rowland, C. F., Gobet, F., & Pine, J. (2019). The impact of shared book reading on children's language skills: A meta-analysis. Educational Research Review, 28: 100290. doi:10.1016/j.edurev.2019.100290.

    Abstract

    Shared book reading is thought to have a positive impact on young children's language development, with shared reading interventions often run in an attempt to boost children's language skills. However, despite the volume of research in this area, a number of issues remain outstanding. The current meta-analysis explored whether shared reading interventions are equally effective (a) across a range of study designs; (b) across a range of different outcome variables; and (c) for children from different SES groups. It also explored the potentially moderating effects of intervention duration, child age, use of dialogic reading techniques, person delivering the intervention and mode of intervention delivery.

    Our results show that, while there is an effect of shared reading on language development, this effect is smaller than reported in previous meta-analyses (effect size = 0.194, p = .002). They also show that this effect is moderated by the type of control group used and is negligible in studies with active control groups (effect size = 0.028, p = .703). Finally, they show no significant effects of differences in outcome variable (ps ≥ .286), socio-economic status (p = .658), or any of our other potential moderators (ps ≥ .077), and non-significant effects for studies with follow-ups (effect size = 0.139, p = .200). On the basis of these results, we make a number of recommendations for researchers and educators about the design and implementation of future shared reading interventions.

    Additional information

    Supplementary data
  • Noordzij, M., Newman-Norlund, S. E., De Ruiter, J. P., Hagoort, P., Levinson, S. C., & Toni, I. (2009). Brain mechanisms underlying human communication. Frontiers in Human Neuroscience, 3:14. doi:10.3389/neuro.09.014.2009.

    Abstract

    Human communication has been described as involving the coding-decoding of a conventional symbol system, which could be supported by parts of the human motor system (i.e. the “mirror neurons system”). However, this view does not explain how these conventions could develop in the first place. Here we target the neglected but crucial issue of how people organize their non-verbal behavior to communicate a given intention without pre-established conventions. We have measured behavioral and brain responses in pairs of subjects during communicative exchanges occurring in a real, interactive, on-line social context. In two fMRI studies, we found robust evidence that planning new communicative actions (by a sender) and recognizing the communicative intention of the same actions (by a receiver) relied on spatially overlapping portions of their brains (the right posterior superior temporal sulcus). The response of this region was lateralized to the right hemisphere, modulated by the ambiguity in meaning of the communicative acts, but not by their sensorimotor complexity. These results indicate that the sender of a communicative signal uses his own intention recognition system to make a prediction of the intention recognition performed by the receiver. This finding supports the notion that our communicative abilities are distinct from both sensorimotor processes and language abilities.
  • Nuthmann, A., De Groot, F., Huettig, F., & Olivers, C. L. N. (2019). Extrafoveal attentional capture by object semantics. PLoS One, 14(5): e0217051. doi:10.1371/journal.pone.0217051.

    Abstract

    There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
  • Obleser, J., & Eisner, F. (2009). Pre-lexical abstraction of speech in the auditory cortex. Trends in Cognitive Sciences, 13, 14-19. doi:10.1016/j.tics.2008.09.005.

    Abstract

    Speech perception requires the decoding of complex acoustic patterns. According to most cognitive models of spoken word recognition, this complexity is dealt with before lexical access via a process of abstraction from the acoustic signal to pre-lexical categories. It is currently unclear how these categories are implemented in the auditory cortex. Recent advances in animal neurophysiology and human functional imaging have made it possible to investigate the processing of speech in terms of probabilistic cortical maps rather than simple cognitive subtraction, which will enable us to relate neurometric data more directly to behavioural studies. We suggest that integration of insights from cognitive science, neurophysiology and functional imaging is necessary for furthering our understanding of pre-lexical abstraction in the cortex.

  • Ogasawara, N., & Warner, N. (2009). Processing missing vowels: Allophonic processing in Japanese. Language and Cognitive Processes, 24, 376-411. doi:10.1080/01690960802084028.

    Abstract

    The acoustic realisation of a speech sound varies, often showing allophonic variation triggered by surrounding sounds. Listeners recognise words and sounds well despite such variation, and even make use of allophonic variability in processing. This study reports five experiments on processing of the reduced/unreduced allophonic alternation of Japanese high vowels. The results show that listeners use phonological knowledge of their native language during phoneme processing and word recognition. However, interactions of the phonological and acoustic effects differ in these two processes. A facilitatory phonological effect and an inhibitory acoustic effect cancel one another out in phoneme processing; while in word recognition, the facilitatory phonological effect overrides the inhibitory acoustic effect. Four potential models of the processing of allophonic variation are discussed. The results can be accommodated in two of them, but require additional assumptions or modifications to the models, and primarily support lexical specification of allophonic variability.

  • Ohlerth, A.-K., Valentin, A., Vergani, F., Ashkan, K., & Bastiaanse, R. (2020). The verb and noun test for peri-operative testing (VAN-POP): Standardized language tests for navigated transcranial magnetic stimulation and direct electrical stimulation. Acta Neurochirurgica, (2), 397-406. doi:10.1007/s00701-019-04159-x.

    Abstract

    Background

    Protocols for intraoperative language mapping with direct electrical stimulation (DES) often include various language tasks triggering both nouns and verbs in sentences. Such protocols are not readily available for navigated transcranial magnetic stimulation (nTMS), where only single word object naming is generally used. Here, we present the development, norming, and standardization of the verb and noun test for peri-operative testing (VAN-POP) that measures language skills more extensively.
    Methods

    The VAN-POP tests noun and verb retrieval in sentence context. Items are marked and balanced for several linguistic factors known to influence word retrieval. The VAN-POP was administered in English, German, and Dutch under conditions that are used for nTMS and DES paradigms. For each language, 30 speakers were tested.
    Results

    At least 50 items per task per language were named fluently and reached a high naming agreement.
    Conclusion

    The protocol proved to be suitable for pre- and intraoperative language mapping with nTMS and DES.
  • O’Meara, C., Kung, S. S., & Majid, A. (2019). The challenge of olfactory ideophones: Reconsidering ineffability from the Totonac-Tepehua perspective. International Journal of American Linguistics, 85(2), 173-212. doi:10.1086/701801.

    Abstract

    Olfactory impressions are said to be ineffable, but little systematic exploration has been done to substantiate this. We explored olfactory language in Huehuetla Tepehua—a Totonac-Tepehua language spoken in Hidalgo, Mexico—which has a large inventory of ideophones, words with sound-symbolic properties used to describe perceptuomotor experiences. A multi-method study found Huehuetla Tepehua has 45 olfactory ideophones, illustrating intriguing sound-symbolic alternation patterns. Elaboration in the olfactory domain is not unique to this language; related Totonac-Tepehua languages also have impressive smell lexicons. Comparison across these languages shows olfactory and gustatory terms overlap in interesting ways, mirroring the physiology of smelling and tasting. However, although cognate taste terms are formally similar, olfactory terms are less so. We suggest the relative instability of smell vocabulary in comparison with those of taste likely results from the more varied olfactory experiences caused by the mutability of smells in different environments.
  • Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302-315. doi:10.3758/MC.37.3.302.

    Abstract

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We will present an analysis of false alarms to the nonsense-sign combinations—that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
  • Ortega, G., Schiefner, A., & Ozyurek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to sign. Cognition, 191: 103996. doi:10.1016/j.cognition.2019.06.008.

    Abstract

    The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as means of expression. Despite their striking differences they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the influence of prior knowledge to acquire new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of ‘manual cognates’ that help non-signing adults to break into a new language at first exposure.

    Additional information

    Supplementary Materials
  • Ortega, G., Ozyurek, A., & Peeters, D. (2020). Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 403-415. doi:10.1037/xlm0000729.

    Abstract

    When learning a second spoken language, cognates, words overlapping in form and meaning with one’s native language, help break into the language one wishes to acquire. But what happens when the to-be-acquired second language is a sign language? We tested whether hearing nonsigners rely on their gestural repertoire at first exposure to a sign language. Participants saw iconic signs with high and low overlap with the form of iconic gestures while electrophysiological brain activity was recorded. Upon first exposure, signs with low overlap with gestures elicited enhanced positive amplitude in the P3a component compared to signs with high overlap. This effect disappeared after a training session. We conclude that nonsigners generate expectations about the form of iconic signs never seen before based on their implicit knowledge of gestures, even without having to produce them. Learners thus draw from any available semiotic resources when acquiring a second language, and not only from their linguistic experience.
  • Ortega, G., & Ozyurek, A. (2020). Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behavior Research Methods, 52, 51-67. doi:10.3758/s13428-019-01204-6.

    Abstract

    An unprecedented number of empirical studies have shown that iconic gestures—those that mimic the sensorimotor attributes of a referent—contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture–meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture’s mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
  • Ortega, G., & Ozyurek, A. (2020). Types of iconicity and combinatorial strategies distinguish semantic categories in silent gesture. Language and Cognition, 12(1), 84-113. doi:10.1017/langcog.2019.28.

    Abstract

    In this study we explore whether different types of iconic gestures (i.e., acting, drawing, representing) and their combinations are used systematically to distinguish between different semantic categories in production and comprehension. In Study 1, we elicited silent gestures from Mexican and Dutch participants to represent concepts from three semantic categories: actions, manipulable objects, and non-manipulable objects. Both groups favoured the acting strategy to represent actions and manipulable objects; while non-manipulable objects were represented through the drawing strategy. Actions elicited primarily single gestures whereas objects elicited combinations of different types of iconic gestures as well as pointing. In Study 2, a different group of participants were shown gestures from Study 1 and were asked to guess their meaning. Single-gesture depictions for actions were more accurately guessed than for objects. Objects represented through two-gesture combinations (e.g., acting + drawing) were more accurately guessed than objects represented with a single gesture. We suggest iconicity is exploited to make direct links with a referent, but when it lends itself to ambiguity, individuals resort to combinatorial structures to clarify the intended referent. Iconicity and the need to communicate a clear signal shape the structure of silent gestures and this in turn supports comprehension.
  • Ostarek, M., Joosen, D., Ishag, A., De Nijs, M., & Huettig, F. (2019). Are visual processes causally involved in “perceptual simulation” effects in the sentence-picture verification task? Cognition, 182, 84-94. doi:10.1016/j.cognition.2018.08.017.

    Abstract

    Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect but crucially visual noise did not modulate it. When an interference technique was used that targeted high-level semantic processing (Experiment 3), however, the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) only had a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
  • Ostarek, M., Van Paridon, J., & Montero-Melis, G. (2019). Sighted people’s language is not helpful for blind individuals’ acquisition of typical animal colors. Proceedings of the National Academy of Sciences of the United States of America, 116(44), 21972-21973. doi:10.1073/pnas.1912302116.
  • Ostarek, M., & Huettig, F. (2019). Six challenges for embodiment research. Current Directions in Psychological Science, 28(6), 593-599. doi:10.1177/0963721419866441.

    Abstract

    Twenty years after Barsalou's seminal perceptual symbols paper (Barsalou, 1999), embodied cognition, the notion that cognition involves simulations of sensory, motor, or affective states, has moved in status from an outlandish proposal advanced by a fringe movement in psychology to a mainstream position adopted by large numbers of researchers in the psychological and cognitive (neuro)sciences. While it has generated highly productive work in the cognitive sciences as a whole, it has had a particularly strong impact on research into language comprehension. The view of a mental lexicon based on symbolic word representations, which are arbitrarily linked to sensory aspects of their referents, for example, had been generally accepted since the cognitive revolution in the 1950s. This has radically changed. Given the current status of embodiment as a main theory of cognition, it is somewhat surprising that a close look at the state of affairs in the literature reveals that the debate about the nature of the processes involved in language comprehension is far from settled and key questions remain unanswered. We present several suggestions for a productive way forward.
  • Otten, M., & Van Berkum, J. J. A. (2009). Does working memory capacity affect the ability to predict upcoming words in discourse? Brain Research, 1291, 92-101. doi:10.1016/j.brainres.2009.07.042.

    Abstract

    Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers in their overall capability to make predictions (because of their lack of cognitive resources). High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive ‘prime control’ stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300–600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900–1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.
  • Peeters, D. (2020). Bilingual switching between languages and listeners: Insights from immersive virtual reality. Cognition, 195: 104107. doi:10.1016/j.cognition.2019.104107.

    Abstract

    Perhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.

    Additional information

    Supplementary data
  • Peeters, D., Vanlangendonck, F., Rüschemeyer, S.-A., & Dijkstra, T. (2019). Activation of the language control network in bilingual visual word recognition. Cortex, 111, 63-73. doi:10.1016/j.cortex.2018.10.012.

    Abstract

    Research into bilingual language production has identified a language control network that subserves control operations when bilinguals produce speech. Here we explore which brain areas are recruited for control purposes in bilingual language comprehension. In two experimental fMRI sessions, Dutch-English unbalanced bilinguals read words that differed in cross-linguistic form and meaning overlap across their two languages. The need for control operations was further manipulated by varying stimulus list composition across the two experimental sessions. We observed activation of the language control network in bilingual language comprehension as a function of both cross-linguistic form and meaning overlap and stimulus list composition. These findings suggest that the language control network is shared across bilingual language production and comprehension. We argue that activation of the language control network in language comprehension allows bilinguals to quickly and efficiently grasp the context-relevant meaning of words.

    Additional information

    1-s2.0-S0010945218303459-mmc1.docx
  • Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26(3), 894-900. doi:10.3758/s13423-019-01571-3.

    Abstract

    This paper introduces virtual reality as an experimental method for the language sciences and provides a review of recent studies using the method to answer fundamental, psycholinguistic research questions. It is argued that virtual reality demonstrates that ecological validity and experimental control should not be conceived of as two extremes on a continuum, but rather as two orthogonal factors. Benefits of using virtual reality as an experimental method include that in a virtual environment, as in the real world, there is no artificial spatial divide between participant and stimulus. Moreover, virtual reality experiments do not necessarily have to include a repetitive trial structure or an unnatural experimental task. Virtual agents outperform experimental confederates in terms of the consistency and replicability of their behaviour, allowing for reproducible science across participants and research labs. The main promise of virtual reality as a tool for the experimental language sciences, however, is that it shifts theoretical focus towards the interplay between different modalities (e.g., speech, gesture, eye gaze, facial expressions) in dynamic and communicative real-world environments, complementing studies that focus on one modality (e.g. speech) in isolation.
  • Persson, J., Szalisznyó, K., Antoni, G., Wall, A., Fällmar, D., Zora, H., & Bodén, R. (2020). Phosphodiesterase 10A levels are related to striatal function in schizophrenia: a combined positron emission tomography and functional magnetic resonance imaging study. European Archives of Psychiatry and Clinical Neuroscience, 270(4), 451-459. doi:10.1007/s00406-019-01021-0.

    Abstract

    Pharmacological inhibition of phosphodiesterase 10A (PDE10A) is being investigated as a treatment option in schizophrenia. PDE10A acts postsynaptically on striatal dopamine signaling by regulating neuronal excitability through its inhibition of cyclic adenosine monophosphate (cAMP), and we recently found it to be reduced in schizophrenia compared to controls. Here, this finding of reduced PDE10A in schizophrenia was followed up in the same sample to investigate the effect of reduced striatal PDE10A on the neural and behavioral function of striatal and downstream basal ganglia regions. A positron emission tomography (PET) scan with the PDE10A ligand [11C]Lu AE92686 was performed, followed by a 6 min resting-state magnetic resonance imaging (MRI) scan in ten patients with schizophrenia. To assess the relationship between striatal function and neurophysiological and behavioral functioning, salience processing was assessed using a mismatch negativity paradigm, an auditory event-related electroencephalographic measure, episodic memory was assessed using the Rey auditory verbal learning test (RAVLT) and executive functioning using trail-making test B. Reduced striatal PDE10A was associated with increased amplitude of low-frequency fluctuations (ALFF) within the putamen and substantia nigra, respectively. Higher ALFF in the substantia nigra, in turn, was associated with lower episodic memory performance. The findings are in line with a role for PDE10A in striatal functioning, and suggest that reduced striatal PDE10A may contribute to cognitive symptoms in schizophrenia.
  • Peter, M. S., & Rowland, C. F. (2019). Aligning developmental and processing accounts of implicit and statistical learning. Topics in Cognitive Science, 11, 555-572. doi:10.1111/tops.12396.

    Abstract

    A long‐standing question in child language research concerns how children achieve mature syntactic knowledge in the face of a complex linguistic environment. A widely accepted view is that this process involves extracting distributional regularities from the environment in a manner that is incidental and happens, for the most part, without the learner's awareness. In this way, the debate speaks to two associated but separate literatures in language acquisition: statistical learning and implicit learning. Both fields have explored this issue in some depth but, at present, neither the results from the infant studies used by the statistical learning literature nor the artificial grammar learning tasks studies from the implicit learning literature can be used to fully explain how children's syntax becomes adult‐like. In this work, we consider an alternative explanation—that children use error‐based learning to become mature syntax users. We discuss this proposal in the light of the behavioral findings from structural priming studies and the computational findings from Chang, Dell, and Bock's (2006) dual‐path model, which incorporates properties from both statistical and implicit learning, and offers an explanation for syntax learning and structural priming using a common error‐based learning mechanism. We then turn our attention to future directions for the field, here suggesting how structural priming might inform the statistical learning and implicit learning literature on the nature of the learning mechanism.
  • Peter, M. S., Durrant, S., Jessop, A., Bidgood, A., Pine, J. M., & Rowland, C. F. (2019). Does speed of processing or vocabulary size predict later language growth in toddlers? Cognitive Psychology, 115: 101238. doi:10.1016/j.cogpsych.2019.101238.

    Abstract

    It is becoming increasingly clear that the way that children acquire cognitive representations depends critically on how their processing system is developing. In particular, recent studies suggest that individual differences in language processing speed play an important role in explaining the speed with which children acquire language. Inconsistencies across studies, however, mean that it is not clear whether this relationship is causal or correlational, whether it is present right across development, or whether it extends beyond word learning to affect other aspects of language learning, like syntax acquisition. To address these issues, the current study used the looking-while-listening paradigm devised by Fernald, Swingley, and Pinto (2001) to test the speed with which a large longitudinal cohort of children (the Language 0–5 Project) processed language at 19, 25, and 31 months of age, and took multiple measures of vocabulary (UKCDI, Lincoln CDI, CDI-III) and syntax (Lincoln CDI) between 8 and 37 months of age. Processing speed correlated with vocabulary size - though this relationship changed over time, and was observed only when there was variation in how well the items used in the looking-while-listening task were known. Fast processing speed was a positive predictor of subsequent vocabulary growth, but only for children with smaller vocabularies. Faster processing speed did, however, predict faster syntactic growth across the whole sample, even when controlling for concurrent vocabulary. The results indicate a relatively direct relationship between processing speed and syntactic development, but point to a more complex interaction between processing speed, vocabulary size and subsequent vocabulary growth.
  • Petras, K., Ten Oever, S., Jacobs, C., & Goffaux, V. (2019). Coarse-to-fine information integration in human vision. NeuroImage, 186, 103-112. doi:10.1016/j.neuroimage.2018.10.086.

    Abstract

    Coarse-to-fine theories of vision propose that the coarse information carried by the low spatial frequencies (LSF) of visual input guides the integration of finer, high spatial frequency (HSF) detail. Whether and how LSF modulates HSF processing in naturalistic broad-band stimuli is still unclear. Here we used multivariate decoding of EEG signals to separate the respective contribution of LSF and HSF to the neural response evoked by broad-band images. Participants viewed images of human faces, monkey faces and phase-scrambled versions that were either broad-band or filtered to contain LSF or HSF. We trained classifiers on EEG scalp-patterns evoked by filtered scrambled stimuli and evaluated the derived models on broad-band scrambled and intact trials. We found reduced HSF contribution when LSF was informative towards image content, indicating that coarse information does guide the processing of fine detail, in line with coarse-to-fine theories. We discuss the potential cortical mechanisms underlying such coarse-to-fine feedback.

    Additional information

    Supplementary figures
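    As an illustration of the cross-decoding logic described in the entry above (classifiers trained on scalp patterns evoked by filtered stimuli and evaluated on broad-band trials), the sketch below uses scikit-learn on simulated data. The array shapes, class labels, and the choice of a logistic-regression classifier are assumptions made for illustration; they are not details taken from the paper.

```python
# Minimal sketch of a cross-condition decoding scheme (assumed setup, not the
# authors' pipeline): train on trials from the filtered conditions, then test
# how well the model generalizes to held-out broad-band trials.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated scalp patterns: trials x (channels * time points).
n_filtered, n_broadband, n_features = 200, 100, 64 * 50
X_filtered = rng.normal(size=(n_filtered, n_features))    # LSF/HSF-filtered trials
y_filtered = rng.integers(0, 2, size=n_filtered)          # hypothetical binary labels
X_broadband = rng.normal(size=(n_broadband, n_features))  # unfiltered trials
y_broadband = rng.integers(0, 2, size=n_broadband)

# Fit the classifier on filtered trials only.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_filtered, y_filtered)

# Evaluate generalization to broad-band trials; above-chance accuracy would
# indicate that the filtered-condition patterns carry information shared with
# the broad-band response.
print("cross-condition accuracy:", clf.score(X_broadband, y_broadband))
```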
  • Pijls, F., & Kempen, G. (1986). Een psycholinguïstisch model voor grammatische samentrekking. De Nieuwe Taalgids, 79, 217-234.
  • Pijls, F., Daelemans, W., & Kempen, G. (1987). Artificial intelligence tools for grammar and spelling instruction. Instructional Science, 16(4), 319-336. doi:10.1007/BF00117750.

    Abstract

    In The Netherlands, grammar teaching is an especially important subject in the curriculum of children aged 10-15 for several reasons. However, in spite of all attention and time invested, the results are poor. This article describes the problems and our attempt to overcome them by developing an intelligent computational instructional environment consisting of: a linguistic expert system, containing a module representing grammar and spelling rules and a number of modules to manipulate these rules; a didactic module; and a student interface with special facilities for grammar and spelling. Three prototypes of the functionality are discussed: BOUWSTEEN and COGO, which are programs for constructing and analyzing Dutch sentences; and TDTDT, a program for the conjugation of Dutch verbs.
  • Pijls, F., & Kempen, G. (1987). Kennistechnologische leermiddelen in het grammatica- en spellingonderwijs. Nederlands Tijdschrift voor de Psychologie, 42, 354-363.
  • Pijnacker, J., Geurts, B., Van Lambalgen, M., Kan, C. C., Buitelaar, J. K., & Hagoort, P. (2009). Defeasible reasoning in high-functioning adults with autism: Evidence for impaired exception-handling. Neuropsychologia, 47, 644-651. doi:10.1016/j.neuropsychologia.2008.11.011.

    Abstract

    While autism is one of the most intensively researched psychiatric disorders, little is known about reasoning skills of people with autism. The focus of this study was on defeasible inferences, that is inferences that can be revised in the light of new information. We used a behavioral task to investigate (a) conditional reasoning and (b) the suppression of conditional inferences in high-functioning adults with autism. In the suppression task a possible exception was made salient which could prevent a conclusion from being drawn. We predicted that the autism group would have difficulties dealing with such exceptions because they require mental flexibility to adjust to the context, which is often impaired in autism. The findings confirm our hypothesis that high-functioning adults with autism have a specific difficulty with exception-handling during reasoning. It is suggested that defeasible reasoning is also involved in other cognitive domains. Implications for neural underpinnings of reasoning and autism are discussed.
  • Pijnacker, J., Hagoort, P., Buitelaar, J., Teunisse, J.-P., & Geurts, B. (2009). Pragmatic inferences in high-functioning adults with autism and Asperger syndrome. Journal of Autism and Developmental Disorders, 39(4), 607-618. doi:10.1007/s10803-008-0661-8.

    Abstract

    Although people with autism spectrum disorders (ASD) often have severe problems with pragmatic aspects of language, little is known about their pragmatic reasoning. We carried out a behavioral study on high-functioning adults with autistic disorder (n = 11) and Asperger syndrome (n = 17) and matched controls (n = 28) to investigate whether they are capable of deriving scalar implicatures, which are generally considered to be pragmatic inferences. Participants were presented with underinformative sentences like “Some sparrows are birds”. This sentence is logically true, but pragmatically inappropriate if the scalar implicature “Not all sparrows are birds” is derived. The present findings indicate that the combined ASD group was just as likely as controls to derive scalar implicatures, yet there was a difference between participants with autistic disorder and Asperger syndrome, suggesting a potential differentiation between these disorders in pragmatic reasoning. Moreover, our results suggest that verbal intelligence is a constraint for task performance in autistic disorder but not in Asperger syndrome.
  • Poletiek, F. H., & Van Schijndel, T. J. P. (2009). Stimulus set size and statistical coverage of the grammar in artificial grammar learning. Psychonomic Bulletin & Review, 16(6), 1058-1064. doi:10.3758/PBR.16.6.1058.

    Abstract

    Adults and children acquire knowledge of the structure of their environment on the basis of repeated exposure to samples of structured stimuli. In the study of inductive learning, a straightforward issue is how much sample information is needed to learn the structure. The present study distinguishes between two measures for the amount of information in the sample: set size and the extent to which the set of exemplars statistically covers the underlying structure. In an artificial grammar learning experiment, learning was affected by the sample’s statistical coverage of the grammar, but not by its mere size. Our result suggests an alternative explanation of the set size effects on learning found in previous studies (McAndrews & Moscovitch, 1985; Meulemans & Van der Linden, 1997), because, as we argue, set size was confounded with statistical coverage in these studies.
  • Poletiek, F. H. (2009). Popper's Severity of Test as an intuitive probabilistic model of hypothesis testing. Behavioral and Brain Sciences, 32(1), 99-100. doi:10.1017/S0140525X09000454.
  • Poletiek, F. H., & Wolters, G. (2009). What is learned about fragments in artificial grammar learning? A transitional probabilities approach. Quarterly Journal of Experimental Psychology, 62(5), 868-876. doi:10.1080/17470210802511188.

    Abstract

    Learning local regularities in sequentially structured materials is typically assumed to be based on encoding of the frequencies of these regularities. We explore the view that transitional probabilities between elements of chunks, rather than frequencies of chunks, may be the primary factor in artificial grammar learning (AGL). The transitional probability model (TPM) that we propose is argued to provide an adaptive and parsimonious strategy for encoding local regularities in order to induce sequential structure from an input set of exemplars of the grammar. In a variant of the AGL procedure, in which participants estimated the frequencies of bigrams occurring in a set of exemplars they had been exposed to previously, participants were shown to be more sensitive to local transitional probability information than to mere pattern frequencies.
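    To make the contrast drawn in the entry above concrete (chunk frequencies versus transitional probabilities between elements of chunks), here is a minimal sketch that computes both measures for the bigrams in a set of exemplar strings. The strings and variable names are hypothetical and do not come from the study's materials.

```python
# Sketch: bigram frequencies vs. transitional probabilities in a set of
# exemplar strings (hypothetical letter strings, not the study's grammar).
from collections import Counter

exemplars = ["MTVRX", "MTTVX", "VTVRX", "MVRXX"]

bigram_counts = Counter()
first_element_counts = Counter()
for string in exemplars:
    for a, b in zip(string, string[1:]):
        bigram_counts[(a, b)] += 1
        first_element_counts[a] += 1

# Transitional probability P(b | a): how often element 'a' is followed by 'b',
# relative to how often 'a' occurs as the first element of any bigram.
transitional_probability = {
    (a, b): count / first_element_counts[a]
    for (a, b), count in bigram_counts.items()
}

for (a, b), p in sorted(transitional_probability.items()):
    print(f"P({b}|{a}) = {p:.2f}  (bigram frequency: {bigram_counts[(a, b)]})")
```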
  • Poort, E. D., & Rodd, J. M. (2019). A database of Dutch–English cognates, interlingual homographs and translation equivalents. Journal of Cognition, 2(1): 15. doi:10.5334/joc.67.

    Abstract

    To investigate the structure of the bilingual mental lexicon, researchers in the field of bilingualism often use words that exist in multiple languages: cognates (which have the same meaning) and interlingual homographs (which have a different meaning). A high proportion of these studies have investigated language processing in Dutch–English bilinguals. Despite the abundance of research using such materials, few studies exist that have validated such materials. We conducted two rating experiments in which Dutch–English bilinguals rated the meaning, spelling and pronunciation similarity of pairs of Dutch and English words. On the basis of these results, we present a new database of Dutch–English identical cognates (e.g. “wolf”–“wolf”; n = 58), non-identical cognates (e.g. “kat”–“cat”; n = 74), interlingual homographs (e.g. “angel”–“angel”; n = 72) and translation equivalents (e.g. “wortel”–“carrot”; n = 78). The database can be accessed at http://osf.io/tcdxb/.

    Additional information

    database
  • Poort, E. D., & Rodd, J. M. (2019). Towards a distributed connectionist account of cognates and interlingual homographs: Evidence from semantic relatedness tasks. PeerJ, 7: e6725. doi:10.7717/peerj.6725.

    Abstract

    Background

    Current models of how bilinguals process cognates (e.g., “wolf”, which has the same meaning in Dutch and English) and interlingual homographs (e.g., “angel”, meaning “insect’s sting” in Dutch) are based primarily on data from lexical decision tasks. A major drawback of such tasks is that it is difficult—if not impossible—to separate processes that occur during decision making (e.g., response competition) from processes that take place in the lexicon (e.g., lateral inhibition). Instead, we conducted two English semantic relatedness judgement experiments.
    Methods

    In Experiment 1, highly proficient Dutch–English bilinguals (N = 29) and English monolinguals (N = 30) judged the semantic relatedness of word pairs that included a cognate (e.g., “wolf”–“howl”; n = 50), an interlingual homograph (e.g., “angel”–“heaven”; n = 50) or an English control word (e.g., “carrot”–“vegetable”; n = 50). In Experiment 2, another group of highly proficient Dutch–English bilinguals (N = 101) read sentences in Dutch that contained one of those cognates, interlingual homographs or the Dutch translation of one of the English control words (e.g., “wortel” for “carrot”) approximately 15 minutes prior to completing the English semantic relatedness task.
    Results

    In Experiment 1, there was an interlingual homograph inhibition effect of 39 ms only for the bilinguals, but no evidence for a cognate facilitation effect. Experiment 2 replicated these findings and also revealed that cross-lingual long-term priming had an opposite effect on the cognates and interlingual homographs: recent experience with a cognate in Dutch speeded processing of those items 15 minutes later in English but slowed processing of interlingual homographs. However, these priming effects were smaller than previously observed using a lexical decision task.
    Conclusion

    After comparing our results to studies in both the bilingual and monolingual domain, we argue that bilinguals appear to process cognates and interlingual homographs as monolinguals process polysemes and homonyms, respectively. In the monolingual domain, processing of such words is best modelled using distributed connectionist frameworks. We conclude that it is necessary to explore the viability of such a model for the bilingual case.
  • Postema, M., De Marco, M., Colato, E., & Venneri, A. (2019). A study of within-subject reliability of the brain’s default-mode network. Magnetic Resonance Materials in Physics, Biology and Medicine, 32(3), 391-405. doi:10.1007/s10334-018-00732-0.

    Abstract

    Objective

    Resting-state functional magnetic resonance imaging (fMRI) is promising for Alzheimer’s disease (AD). This study aimed to examine short-term reliability of the default-mode network (DMN), one of the main haemodynamic patterns of the brain.
    Materials and methods

    Using a 1.5 T Philips Achieva scanner, two consecutive resting-state fMRI runs were acquired on 69 healthy adults, 62 patients with mild cognitive impairment (MCI) due to AD, and 28 patients with AD dementia. The anterior and posterior DMN and, as control, the visual-processing network (VPN) were computed using two different methodologies: connectivity of predetermined seeds (theory-driven) and dual regression (data-driven). Divergence and convergence in network strength and topography were calculated with paired t tests, global correlation coefficients, voxel-based correlation maps, and indices of reliability.
    Results

    No topographical differences were found in any of the networks. High correlations and reliability were found in the posterior DMN of healthy adults and MCI patients. Lower reliability was found in the anterior DMN and in the VPN, and in the posterior DMN of dementia patients.
    Discussion

    Strength and topography of the posterior DMN appear relatively stable and reliable over a short-term period of acquisition but with some degree of variability across clinical samples.
  • Postema, M., Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Busatto Filho, G., Calderoni, S., Calvo, R., Daly, E., Deruelle, C., Di Martino, A., Dinstein, I., Duran, F. L. S., Durston, S., Ecker, C., Ehrlich, S., Fair, D., Fedor, J., Feng, X., Fitzgerald, J., Floris, D. L., Freitag, C. M., Gallagher, L., Glahn, D. C., Gori, I., Haar, S., Hoekstra, L., Jahanshad, N., Jalbrzikowski, M., Janssen, J., King, J. A., Kong, X., Lazaro, L., Lerch, J. P., Luna, B., Martinho, M. M., McGrath, J., Medland, S. E., Muratori, F., Murphy, C. M., Murphy, D. G. M., O'Hearn, K., Oranje, B., Parellada, M., Puig, O., Retico, A., Rosa, P., Rubia, K., Shook, D., Taylor, M., Tosetti, M., Wallace, G. L., Zhou, F., Thompson, P., Fisher, S. E., Buitelaar, J. K., & Francks, C. (2019). Altered structural brain asymmetry in autism spectrum disorder in a study of 54 datasets. Nature Communications, 10: 4958. doi:10.1038/s41467-019-13005-8.
  • Postema, M., Carrion Castillo, A., Fisher, S. E., Vingerhoets, G., & Francks, C. (2020). The genetics of situs inversus without primary ciliary dyskinesia. Scientific Reports, 10: 3677. doi:10.1038/s41598-020-60589-z.

    Abstract

    Situs inversus (SI), a left-right mirror reversal of the visceral organs, can occur with recessive Primary Ciliary Dyskinesia (PCD). However, most people with SI do not have PCD, and the etiology of their condition remains poorly studied. We sequenced the genomes of 15 people with SI, of which six had PCD, as well as 15 controls. Subjects with non-PCD SI in this sample had an elevated rate of left-handedness (five out of nine), which suggested possible developmental mechanisms linking brain and body laterality. The six SI subjects with PCD all had likely recessive mutations in genes already known to cause PCD. Two non-PCD SI cases also had recessive mutations in known PCD genes, suggesting reduced penetrance for PCD in some SI cases. One non-PCD SI case had recessive mutations in PKD1L1, and another in CFAP52 (also known as WDR16). Both of these genes have previously been linked to SI without PCD. However, five of the nine non-PCD SI cases, including three of the left-handers in this dataset, had no obvious monogenic basis for their condition. Environmental influences, or possible random effects in early development, must be considered.

    Additional information

    Supplementary information
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Reply to Ravignani and Kotz: Physical impulses from upper-limb movements impact the respiratory–vocal system. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23225-23226. doi:10.1073/pnas.2015452117.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2020). Acoustic information about upper limb movement in voicing. Proceedings of the National Academy of Sciences of the United States of America, 117(21), 11364-11367. doi:10.1073/pnas.2004163117.

    Abstract

    We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear but not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.
  • Pouw, W., Wassenburg, S. I., Hostetter, A. B., De Koning, B. B., & Paas, F. (2020). Does gesture strengthen sensorimotor knowledge of objects? The case of the size-weight illusion. Psychological Research, 84(4), 966-980. doi:10.1007/s00426-018-1128-y.

    Abstract

    Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects’ weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants’ conscious illusory perception of objects’ weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory and we found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants’ heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent’s conscious renderings about the actions they describe, rather than implicit motor routines.
  • Pouw, W., Harrison, S. J., Esteve-Gibert, N., & Dixon, J. A. (2020). Energy flows in gesture-speech physics: The respiratory-vocal system and its coupling with hand gestures. The Journal of the Acoustical Society of America, 148(3): 1231. doi:10.1121/10.0001730.

    Abstract

    Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement is present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route of why communicative upper limb movements co-occur with speech in humans.

    Additional information

    Link to Preprint on OSF
  • Pouw, W., & Dixon, J. A. (2019). Entrainment and modulation of gesture-speech synchrony under delayed auditory feedback. Cognitive Science, 43(3): e12721. doi:10.1111/cogs.12721.

    Abstract

    Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill’s (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending McNeill’s original results, we obtain evidence that (a) gesture-speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture-speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are codependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

    Additional information

    https://osf.io/pcde3/
  • Pouw, W., & Dixon, J. A. (2020). Gesture networks: Introducing dynamic time warping and network analysis for the kinematic study of gesture ensembles. Discourse Processes, 57(4), 301-319. doi:10.1080/0163853X.2019.1678967.

    Abstract

    We introduce applications of established methods in time-series and network analysis that we jointly apply here for the kinematic study of gesture ensembles. We define a gesture ensemble as the set of gestures produced during discourse by a single person or a group of persons. Here we are interested in how gestures kinematically relate to one another. We use a bivariate time-series analysis called dynamic time warping to assess how similar each gesture is to other gestures in the ensemble in terms of their velocity profiles (as well as studying multivariate cases with gesture velocity and speech amplitude envelope profiles). By relating each gesture event to all other gesture events produced in the ensemble, we obtain a weighted matrix that essentially represents a network of similarity relationships. We can therefore apply network analysis that can gauge, for example, how diverse or coherent certain gestures are with respect to the gesture ensemble. We believe these analyses promise to be of great value for gesture studies, as we can come to understand how low-level gesture features (kinematics of gesture) relate to the higher-order organizational structures present at the level of discourse.

    Additional information

    Open Data OSF
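    As a rough sketch of the pipeline described in the entry above, the code below computes pairwise dynamic time warping (DTW) distances between simulated gesture velocity profiles and treats the resulting similarity matrix as a weighted network, from which a simple centrality measure is derived. The data, the plain DTW implementation, and the similarity and centrality choices are assumptions for illustration, not the authors' implementation.

```python
# Sketch: DTW-based similarity network over a gesture ensemble (simulated data).
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D velocity profiles."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(1)
# A hypothetical ensemble of 10 gestures, each a velocity profile of varying length.
gestures = [np.abs(rng.normal(size=rng.integers(40, 80))) for _ in range(10)]

# Pairwise DTW distances, converted to a weighted similarity (adjacency) matrix.
n = len(gestures)
distance = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        distance[i, j] = distance[j, i] = dtw_distance(gestures[i], gestures[j])
similarity = 1.0 / (1.0 + distance)
np.fill_diagonal(similarity, 0.0)

# A simple network measure: weighted degree centrality, indicating how
# kinematically "typical" each gesture is with respect to the ensemble.
centrality = similarity.sum(axis=1)
print("most ensemble-typical gesture:", int(np.argmax(centrality)))
```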
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2020). Gesture–speech physics: The biomechanical basis for the emergence of gesture–speech synchrony. Journal of Experimental Psychology: General, 149(2), 391-404. doi:10.1037/xge0000646.

    Abstract

    The phenomenon of gesture–speech synchrony involves tight coupling of prosodic contrasts in gesture movement (e.g., peak velocity) and speech (e.g., peaks in fundamental frequency; F0). Gesture–speech synchrony has been understood as completely governed by sophisticated neural-cognitive mechanisms. However, gesture–speech synchrony may have its original basis in the resonating forces that travel through the body. In the current preregistered study, movements with high physical impact affected phonation in line with gesture–speech synchrony as observed in natural contexts. Rhythmic beating of the arms entrained phonation acoustics (F0 and the amplitude envelope). Such effects were absent for a condition with low-impetus movements (wrist movements) and a condition without movement. Further, movement–phonation synchrony was more pronounced when participants were standing as opposed to sitting, indicating a mediating role for postural stability. We conclude that gesture–speech synchrony has a biomechanical basis, which will have implications for our cognitive, ontogenetic, and phylogenetic understanding of multimodal language.
  • Pouw, W., Rop, G., De Koning, B., & Paas, F. (2019). The cognitive basis for the split-attention effect. Journal of Experimental Psychology: General, 148(11), 2058-2075. doi:10.1037/xge0000578.

    Abstract

    The split-attention effect entails that learning from spatially separated, but mutually referring information sources (e.g., text and picture), is less effective than learning from the equivalent spatially integrated sources. According to cognitive load theory, impaired learning is caused by the working memory load imposed by the need to distribute attention between the information sources and mentally integrate them. In this study, we directly tested whether the split-attention effect is caused by spatial separation per se. Spatial distance was varied in basic cognitive tasks involving pictures (Experiment 1) and text–picture combinations (Experiment 2; preregistered study), and in more ecologically valid learning materials (Experiment 3). Experiment 1 showed that having to integrate two pictorial stimuli at greater distances diminished performance on a secondary visual working memory task, but did not lead to slower integration. When participants had to integrate a picture and written text in Experiment 2, a greater distance led to slower integration of the stimuli, but not to diminished performance on the secondary task. Experiment 3 showed that presenting spatially separated (compared with integrated) textual and pictorial information yielded fewer integrative eye movements, but this was not further exacerbated when increasing spatial distance even further. This effect on learning processes did not lead to differences in learning outcomes between conditions. In conclusion, we provide evidence that larger distances between spatially separated information sources influence learning processes, but that spatial separation on its own is not likely to be the only, nor a sufficient, condition for impacting learning outcomes.

  • Pouw, W., Trujillo, J. P., & Dixon, J. A. (2020). The quantification of gesture–speech synchrony: A tutorial and validation of multimodal data acquisition using device-based and video-based motion tracking. Behavior Research Methods, 52, 723-740. doi:10.3758/s13428-019-01271-9.

    Abstract

    There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture’s kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field’s classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a “moment of maximum effort” is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.
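    One of the two video-based approaches compared in the entry above, the pixel-change method, can be approximated in a few lines: frame-to-frame pixel differences act as a proxy for movement, from which kinematic peaks can then be extracted. The simulated frames, smoothing window, and peak-detection parameters below are illustrative assumptions only, not the tutorial's actual pipeline.

```python
# Sketch of a pixel-change movement estimate and kinematic peak detection
# (simulated frames; parameters are placeholders chosen for illustration).
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
# Hypothetical grayscale video: frames x height x width.
frames = rng.integers(0, 256, size=(300, 120, 160)).astype(float)

# Movement proxy: mean absolute pixel change between consecutive frames.
pixel_change = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Light smoothing with a moving average before peak detection.
window = 5
smoothed = np.convolve(pixel_change, np.ones(window) / window, mode="same")

# Kinematic peaks: local maxima in the movement time series.
peaks, _ = find_peaks(smoothed, distance=15)
print("candidate kinematic peaks at frames:", peaks[:10])
```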
  • Powlesland, A. S., Hitchen, P. G., Parry, S., Graham, S. A., Barrio, M. M., Elola, M. T., Mordoh, J., Dell, A., Drickamer, K., & Taylor, M. E. (2009). Targeted glycoproteomic identification of cancer cell glycosylation. Glycobiology, 9, 899-909. doi:10.1093/glycob/cwp065.

    Abstract

    GalMBP is a fragment of serum mannose-binding protein that has been modified to create a probe for galactose-containing ligands. Glycan array screening demonstrated that the carbohydrate-recognition domain of GalMBP selectively binds common groups of tumor-associated glycans, including Lewis-type structures and T antigen, suggesting that engineered glycan-binding proteins such as GalMBP represent novel tools for the characterization of glycoproteins bearing tumor-associated glycans. Blotting of cell extracts and membranes from MCF7 breast cancer cells with radiolabeled GalMBP was used to demonstrate that it binds to a selected set of high molecular weight glycoproteins that could be purified from MCF7 cells on an affinity column constructed with GalMBP. Proteomic and glycomic analysis of these glycoproteins by mass spectrometry showed that they are forms of CD98hc that bear glycans displaying heavily fucosylated termini, including Lewis(x) and Lewis(y) structures. The pool of ligands was found to include the target ligands for anti-CD15 antibodies, which are commonly used to detect Lewis(x) antigen on tumors, and for the endothelial scavenger receptor C-type lectin, which may be involved in tumor metastasis through interactions with this antigen. A survey of additional breast cancer cell lines reveals that there is wide variation in the types of glycosylation that lead to binding of GalMBP. Higher levels of binding are associated either with the presence of outer-arm fucosylated structures carried on a variety of different cell surface glycoproteins or with the presence of high levels of the mucin MUC1 bearing T antigen.

    Additional information

    Powlesland_2009_Suppl_Mat.pdf
  • Preisig, B., Sjerps, M. J., Hervais-Adelman, A., Kösem, A., Hagoort, P., & Riecke, L. (2020). Bilateral gamma/delta transcranial alternating current stimulation affects interhemispheric speech sound integration. Journal of Cognitive Neuroscience, 32(7), 1242-1250. doi:10.1162/jocn_a_01498.

    Abstract

    Perceiving speech requires the integration of different speech cues, that is, formants. When the speech signal is split so that different cues are presented to the right and left ear (dichotic listening), comprehension requires the integration of binaural information. Based on prior electrophysiological evidence, we hypothesized that the integration of dichotically presented speech cues is enabled by interhemispheric phase synchronization between primary and secondary auditory cortex in the gamma frequency band. We tested this hypothesis by applying transcranial alternating current stimulation (TACS) bilaterally above the superior temporal lobe to induce or disrupt interhemispheric gamma-phase coupling. In contrast to initial predictions, we found that gamma TACS applied in-phase above the two hemispheres (interhemispheric lag 0°) perturbs interhemispheric integration of speech cues, possibly because the applied stimulation perturbs an inherent phase lag between the left and right auditory cortex. We also observed this disruptive effect when applying antiphasic delta TACS (interhemispheric lag 180°). We conclude that interhemispheric phase coupling plays a functional role in interhemispheric speech integration. The direction of this effect may depend on the stimulation frequency.
  • Preisig, B., Sjerps, M. J., Kösem, A., & Riecke, L. (2019). Dual-site high-density 4Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimulation, 12(3), 775-777. doi:10.1016/j.brs.2019.01.007.
  • Preisig, B., & Sjerps, M. J. (2019). Hemispheric specializations affect interhemispheric speech sound integration during duplex perception. The Journal of the Acoustical Society of America, 145, EL190-EL196. doi:10.1121/1.5092829.

    Abstract

    The present study investigated whether speech-related spectral information benefits from initially predominant right or left hemisphere processing. Normal hearing individuals categorized speech sounds composed of an ambiguous base (perceptually intermediate between /ga/ and /da/), presented to one ear, and a disambiguating low or high F3 chirp presented to the other ear. Shorter response times were found when the chirp was presented to the left ear than to the right ear (inducing initially right-hemisphere chirp processing), but no between-ear differences in strength of overall integration. The results are in line with the assumptions of a right hemispheric dominance for spectral processing.

    Additional information

    Supplementary material
  • Protopapas, A., & Gerakaki, S. (2009). Development of processing stress diacritics in reading Greek. Scientific Studies of Reading, 13(6), 453-483. doi:10.1080/10888430903034788.

    Abstract

    In Greek orthography, stress position is marked with a diacritic. We investigated the developmental course of processing the stress diacritic in Grades 2 to 4. Ninety children read 108 pseudowords presented without or with a diacritic either in the same or in a different position relative to the source word. Half of the pseudowords resembled the words they were derived from. Results showed that lexical sources of stress assignment were active in Grade 2 and remained stronger than the diacritic through Grade 4. The effect of the diacritic increased more rapidly and approached the lexical effect with increasing grade. In a second experiment, 90 children read 54 words and 54 pseudowords. The pattern of results for words was similar to that for nonwords suggesting that findings regarding stress assignment using nonwords may generalize to word reading. Decoding of the diacritic does not appear to be the preferred option for developing readers.
  • Prystauka, Y., & Lewis, A. G. (2019). The power of neural oscillations to inform sentence comprehension: A linguistic perspective. Language and Linguistics Compass, 13 (9): e12347. doi:10.1111/lnc3.12347.

    Abstract

    The field of psycholinguistics is currently experiencing an explosion of interest in the analysis of neural oscillations—rhythmic brain activity synchronized at different temporal and spatial levels. Given that language comprehension relies on a myriad of processes, which are carried out in parallel in distributed brain networks, there is hope that this methodology might bring the field closer to understanding some of the more basic (spatially and temporally distributed, yet at the same time often overlapping) neural computations that support language function. In this review, we discuss existing proposals linking oscillatory dynamics in different frequency bands to basic neural computations and review relevant theories suggesting associations between band-specific oscillations and higher-level cognitive processes. More or less consistent patterns of oscillatory activity related to certain types of linguistic processing can already be derived from the evidence that has accumulated over the past few decades. The centerpiece of the current review is a synthesis of such patterns grouped by linguistic phenomenon. We restrict our review to evidence linking measures of oscillatory power to the comprehension of sentences, as well as linguistically (and/or pragmatically) more complex structures. For each grouping, we provide a brief summary and a table of associated oscillatory signatures that a psycholinguist might expect to find when employing a particular linguistic task. Summarizing across different paradigms, we conclude that a handful of basic neural oscillatory mechanisms are likely recruited in different ways and at different times for carrying out a variety of linguistic computations.
  • Pylkkänen, L., Martin, A. E., McElree, B., & Smart, A. (2009). The Anterior Midline Field: Coercion or decision making? Brain and Language, 108(3), 184-190. doi:10.1016/j.bandl.2008.06.006.

    Abstract

    To study the neural bases of semantic composition in language processing without confounds from syntactic composition, recent magnetoencephalography (MEG) studies have investigated the processing of constructions that exhibit some type of syntax-semantics mismatch. The most studied case of such a mismatch is complement coercion; expressions such as the author began the book, where an entity-denoting noun phrase is coerced into an eventive meaning in order to match the semantic properties of the event-selecting verb (e.g., ‘the author began reading/writing the book’). These expressions have been found to elicit increased activity in the Anterior Midline Field (AMF), an MEG component elicited at frontomedial sensors at ∼400 ms after the onset of the coercing noun [Pylkkänen, L., & McElree, B. (2007). An MEG study of silent meaning. Journal of Cognitive Neuroscience, 19, 11]. Thus, the AMF constitutes a potential neural correlate of coercion. However, the AMF was generated in ventromedial prefrontal regions, which are heavily associated with decision-making. This raises the possibility that, instead of semantic processing, the AMF effect may have been related to the experimental task, which was a sensicality judgment. We tested this hypothesis by assessing the effect of coercion when subjects were simply reading for comprehension, without a decision-task. Additionally, we investigated coercion in an adjectival rather than a verbal environment to further generalize the findings. Our results show that an AMF effect of coercion is elicited without a decision-task and that the effect also extends to this novel syntactic environment. We conclude that in addition to its role in non-linguistic higher cognition, ventromedial prefrontal regions contribute to the resolution of syntax-semantics mismatches in language processing.
  • Qin, S., Rijpkema, M., Tendolkar, I., Piekema, C., Hermans, E. J., Binder, M., Petersson, K. M., Luo, J., & Fernández, G. (2009). Dissecting medial temporal lobe contributions to item and associative memory formation. NeuroImage, 46, 874-881. doi:10.1016/j.neuroimage.2009.02.039.

    Abstract

    A fundamental and intensively discussed question is whether medial temporal lobe (MTL) processes that lead to non-associative item memories differ in their anatomical substrate from processes underlying associative memory formation. Using event-related functional magnetic resonance imaging, we implemented a novel design to dissociate brain activity related to item and associative memory formation not only by subsequent memory performance and anatomy but also in time, because the two constituents of each pair to be memorized were presented sequentially with an intra-pair delay of several seconds. Furthermore, the design enabled us to reduce potential differences in memory strength between item and associative memory by increasing task difficulty in the item recognition memory test. Confidence ratings for correct item recognition for both constituents did not differ between trials in which only item memory was correct and trials in which item and associative memory were correct. Specific subsequent memory analyses for item and associative memory formation revealed brain activity that appears selectively related to item memory formation in the posterior inferior temporal, posterior parahippocampal, and perirhinal cortices. In contrast, hippocampal and inferior prefrontal activity predicted successful retrieval of newly formed inter-item associations. Our findings therefore suggest that different MTL subregions indeed play distinct roles in the formation of item memory and inter-item associative memory as expected by several dual process models of the MTL memory system.
  • Quinn, S., & Kidd, E. (2019). Symbolic play promotes non‐verbal communicative exchange in infant–caregiver dyads. British Journal of Developmental Psychology, 37(1), 33-50. doi:10.1111/bjdp.12251.

    Abstract

    Symbolic play has long been considered a fertile context for communicative development (Bruner, 1983, Child's talk: Learning to use language, Oxford University Press, Oxford; Vygotsky, 1962, Thought and language, MIT Press, Cambridge, MA; Vygotsky, 1978, Mind in society: The development of higher psychological processes. Harvard University Press, Cambridge, MA). In the current study, we examined caregiver–infant interaction during symbolic play and compared it to interaction in a comparable but non‐symbolic context (i.e., ‘functional’ play). Fifty‐four (N = 54) caregivers and their 18‐month‐old infants were observed engaging in 20 min of play (symbolic, functional). Play interactions were coded and compared across play conditions for joint attention (JA) and gesture use. Compared with functional play, symbolic play was characterized by greater frequency and duration of JA and greater gesture use, particularly the use of iconic gestures with an object in hand. The results suggest that symbolic play provides a rich context for the exchange and negotiation of meaning, and thus may contribute to the development of important skills underlying communicative development.
  • Radenkovic, S., Bird, M. J., Emmerzaal, T. L., Wong, S. Y., Felgueira, C., Stiers, K. M., Sabbagh, L., Himmelreich, N., Poschet, G., Windmolders, P., Verheijen, J., Witters, P., Altassan, R., Honzik, T., Eminoglu, T. F., James, P. M., Edmondson, A. C., Hertecant, J., Kozicz, T., Thiel, C., Vermeersch, P., Cassiman, D., Beamer, L., Morava, E., & Ghesquiere, B. (2019). The metabolic map into the pathomechanism and treatment of PGM1-CDG. American Journal of Human Genetics, 104(5), 835-846. doi:10.1016/j.ajhg.2019.03.003.

    Abstract

    Phosphoglucomutase 1 (PGM1) encodes the metabolic enzyme that interconverts glucose-6-P and glucose-1-P. Mutations in PGM1 cause impairment in glycogen metabolism and glycosylation, the latter manifesting as a congenital disorder of glycosylation (CDG). This unique metabolic defect leads to abnormal N-glycan synthesis in the endoplasmic reticulum (ER) and the Golgi apparatus (GA). On the basis of the decreased galactosylation in glycan chains, galactose was administered to individuals with PGM1-CDG and was shown to markedly reverse most disease-related laboratory abnormalities. The disease and treatment mechanisms, however, have remained largely elusive. Here, we confirm the clinical benefit of galactose supplementation in PGM1-CDG-affected individuals and obtain significant insights into the functional and biochemical regulation of glycosylation. We report here that, by using tracer-based metabolomics, we found that galactose treatment of PGM1-CDG fibroblasts metabolically re-wires their sugar metabolism, and as such replenishes the depleted levels of galactose-1-P, as well as the levels of UDP-glucose and UDP-galactose, the nucleotide sugars that are required for ER- and GA-linked glycosylation, respectively. To this end, we further show that the galactose in UDP-galactose is incorporated into mature, de novo glycans. Our results also allude to the potential of monosaccharide therapy for several other CDG.
  • Räsänen, O., Seshadri, S., Karadayi, J., Riebling, E., Bunce, J., Cristia, A., Metze, F., Casillas, M., Rosemberg, C., Bergelson, E., & Soderstrom, M. (2019). Automatic word count estimation from daylong child-centered recordings in various language environments using language-independent syllabification of speech. Speech Communication, 113, 63-80. doi:10.1016/j.specom.2019.08.005.

    Abstract

    Automatic word count estimation (WCE) from audio recordings can be used to quantify the amount of verbal communication in a recording environment. One key application of WCE is to measure language input heard by infants and toddlers in their natural environments, as captured by daylong recordings from microphones worn by the infants. Although WCE is nearly trivial for high-quality signals in high-resource languages, daylong recordings are substantially more challenging due to the unconstrained acoustic environments and the presence of near- and far-field speech. Moreover, many use cases of interest involve languages for which reliable ASR systems or even well-defined lexicons are not available. A good WCE system should also perform similarly for low- and high-resource languages in order to enable unbiased comparisons across different cultures and environments. Unfortunately, the current state-of-the-art solution, the LENA system, is based on proprietary software and has only been optimized for American English, limiting its applicability. In this paper, we build on existing work on WCE and present the steps we have taken towards a freely available system for WCE that can be adapted to different languages or dialects with a limited amount of orthographically transcribed speech data. Our system is based on language-independent syllabification of speech, followed by a language-dependent mapping from syllable counts (and a number of other acoustic features) to the corresponding word count estimates. We evaluate our system on samples from daylong infant recordings from six different corpora consisting of several languages and socioeconomic environments, all manually annotated with the same protocol to allow direct comparison. We compare a number of alternative techniques for the two key components in our system: speech activity detection and automatic syllabification of speech. As a result, we show that our system can reach relatively consistent WCE accuracy across multiple corpora and languages (with some limitations). In addition, the system outperforms LENA on three of the four corpora consisting of different varieties of English. We also demonstrate how an automatic neural network-based syllabifier, when trained on multiple languages, generalizes well to novel languages beyond the training data, outperforming two previously proposed unsupervised syllabifiers as a feature extractor for WCE.
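
    Illustrative code sketch

    The two-stage pipeline sketched in this abstract (language-independent syllabification followed by a language-dependent mapping from syllable counts to word counts) can be illustrated with a minimal example. The snippet below is not the authors' system: the syllable and word counts are invented, and the simple linear mapping is only one possible form of the language-dependent mapping the paper describes.

        # Minimal sketch of the second stage of a word count estimation (WCE)
        # pipeline: mapping automatic syllable counts to word count estimates.
        # Hypothetical data; not the authors' implementation.
        import numpy as np

        # Per-utterance syllable counts (from a language-independent syllabifier)
        # and reference word counts (from a small set of orthographic transcriptions).
        syllable_counts = np.array([4, 9, 15, 22, 31, 40], dtype=float)
        word_counts = np.array([3, 6, 11, 16, 23, 29], dtype=float)

        # Fit a least-squares linear mapping from syllable counts to word counts.
        a, b = np.polyfit(syllable_counts, word_counts, deg=1)

        def estimate_word_count(n_syllables):
            """Estimate the number of words from an automatic syllable count."""
            return a * n_syllables + b

        # Apply the mapping to new, unseen recordings (hypothetical counts).
        for n in (12, 27):
            print(f"{n} syllables -> ~{estimate_word_count(n):.1f} words")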
  • Rasenberg, M., Ozyurek, A., & Dingemanse, M. (2020). Alignment in multimodal interaction: An integrative framework. Cognitive Science, 44(11): e12911. doi:10.1111/cogs.12911.

    Abstract

    When people are engaged in social interaction, they can repeat aspects of each other’s communicative behavior, such as words or gestures. This kind of behavioral alignment has been studied across a wide range of disciplines and has been accounted for by diverging theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), while other dimensions remain understudied. This hampers theory testing and building, which requires a well‐defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions to formalize the relationship between any pair of behavior: time, sequence, meaning, form, and modality. We show how assumptions regarding the underlying mechanism of alignment (placed along the continuum of priming vs. grounding) pattern together with operationalizations in terms of the five dimensions. This integrative framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment, and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses regarding alignment.
  • Rasenberg, M., Rommers, J., & Van Bergen, G. (2020). Anticipating predictability: An ERP investigation of expectation-managing discourse markers in dialogue comprehension. Language, Cognition and Neuroscience, 35(1), 1-16. doi:10.1080/23273798.2019.1624789.

    Abstract

    In two ERP experiments, we investigated how the Dutch discourse markers eigenlijk “actually”, signalling expectation disconfirmation, and inderdaad “indeed”, signalling expectation confirmation, affect incremental dialogue comprehension. We investigated their effects on the processing of subsequent (un)predictable words, and on the quality of word representations in memory. Participants read dialogues with (un)predictable endings that followed a discourse marker (eigenlijk in Experiment 1, inderdaad in Experiment 2) or a control adverb. We found no strong evidence that discourse markers modulated online predictability effects elicited by subsequently read words. However, words following eigenlijk elicited an enhanced posterior post-N400 positivity compared with words following an adverb regardless of their predictability, potentially reflecting increased processing costs associated with pragmatically driven discourse updating. No effects of inderdaad were found on online processing, but inderdaad seemed to influence memory for (un)predictable dialogue endings. These findings nuance our understanding of how pragmatic markers affect incremental language comprehension.

    Additional information

    plcp_a_1624789_sm6686.docx
  • Ravignani, A., & Kotz, S. (2020). Breathing, voice and synchronized movement. Proceedings of the National Academy of Sciences of the United States of America, 117(38), 23223-23224. doi:10.1073/pnas.2011402117.
  • Ravignani, A. (2019). [Review of the book Animal beauty: On the evolution of biological aesthetics by C. Nüsslein-Volhard]. Animal Behaviour, 155, 171-172. doi:10.1016/j.anbehav.2019.07.005.
  • Ravignani, A. (2019). [Review of the book The origins of musicality ed. by H. Honing]. Perception, 48(1), 102-105. doi:10.1177/0301006618817430.
  • Ravignani, A. (2019). Humans and other musical animals [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Current Biology, 29(8), R271-R273. doi:10.1016/j.cub.2019.03.013.
  • Ravignani, A., & de Reus, K. (2019). Modelling animal interactive rhythms in communication. Evolutionary Bioinformatics, 15, 1-14. doi:10.1177/1176934318823558.

    Abstract

    Time is one crucial dimension conveying information in animal communication. Evolution has shaped animals’ nervous systems to produce signals with temporal properties fitting their socio-ecological niches. Many quantitative models of mechanisms underlying rhythmic behaviour exist, spanning insects, crustaceans, birds, amphibians, and mammals. However, these computational and mathematical models are often presented in isolation. Here, we provide an overview of the main mathematical models employed in the study of animal rhythmic communication among conspecifics. After presenting basic definitions and mathematical formalisms, we discuss each individual model. These computational models are then compared using simulated data to uncover similarities and key differences in the underlying mechanisms found across species. Our review of the empirical literature is admittedly limited. We stress the need to use comparative computer simulations – both before and after animal experiments – to better understand animal timing in interaction. We hope this article will serve as a potential first step towards a common computational framework to describe temporal interactions in animals, including humans.

    Additional information

    Supplemental material files
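
    Illustrative code sketch

    To give a flavor of the class of quantitative models compared in the Ravignani and de Reus (2019) entry above, the sketch below simulates two phase-coupled oscillators, one standard formalism for interactive calling in which each caller nudges the timing of its own cycle toward its partner's. The intrinsic rates, coupling strength, and initial phases are arbitrary illustrative values, not parameters taken from the paper.

        # Two-oscillator phase-coupling sketch of interactive calling.
        # Arbitrary parameters; does not reproduce any specific model in the paper.
        import math

        def simulate(coupling, steps=200, dt=0.01):
            """Simulate two coupled oscillators; return their final phase difference (rad)."""
            freq_a, freq_b = 2.0, 2.2      # intrinsic call rates in Hz (hypothetical)
            phase_a, phase_b = 0.0, 1.5    # initial phases in radians (hypothetical)
            for _ in range(steps):
                # Each oscillator advances at its own rate and is pulled toward
                # the other's phase; positive coupling favours synchrony.
                phase_a += dt * (2 * math.pi * freq_a + coupling * math.sin(phase_b - phase_a))
                phase_b += dt * (2 * math.pi * freq_b + coupling * math.sin(phase_a - phase_b))
            diff = (phase_b - phase_a) % (2 * math.pi)
            return min(diff, 2 * math.pi - diff)  # wrapped phase difference

        # Without coupling the phase difference drifts; with coupling it shrinks.
        print("uncoupled:", round(simulate(coupling=0.0), 2), "rad")
        print("coupled:  ", round(simulate(coupling=5.0), 2), "rad")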
  • Ravignani, A., Verga, L., & Greenfield, M. D. (2019). Interactive rhythms across species: The evolutionary biology of animal chorusing and turn-taking. Annals of the New York Academy of Sciences, 1453(1), 12-21. doi:10.1111/nyas.14230.

    Abstract

    The study of human language is progressively moving toward comparative and interactive frameworks, extending the concept of turn‐taking to animal communication. While such an endeavor will help us understand the interactive origins of language, any theoretical account for cross‐species turn‐taking should consider three key points. First, animal turn‐taking must incorporate biological studies on animal chorusing, namely how different species coordinate their signals over time. Second, while concepts employed in human communication and turn‐taking, such as intentionality, are still debated in animal behavior, lower level mechanisms with clear neurobiological bases can explain much of animal interactive behavior. Third, social behavior, interactivity, and cooperation can be orthogonal, and the alternation of animal signals need not be cooperative. Considering turn‐taking a subset of chorusing in the rhythmic dimension may avoid overinterpretation and enhance the comparability of future empirical work.
  • Ravignani, A. (2019). Everything you always wanted to know about sexual selection in 129 pages [Review of the book Sexual selection: A very short introduction by M. Zuk and L. W. Simmons]. Journal of Mammalogy, 100(6), 2004-2005. doi:10.1093/jmammal/gyz168.
  • Ravignani, A., & Gamba, M. (2019). Evolving musicality [Review of the book The evolving animal orchestra: In search of what makes us musical by Henkjan Honing]. Trends in Ecology and Evolution, 34(7), 583-584. doi:10.1016/j.tree.2019.04.016.
  • Ravignani, A., Kello, C. T., de Reus, K., Kotz, S. A., Dalla Bella, S., Mendez-Arostegui, M., Rapado-Tamarit, B., Rubio-Garcia, A., & de Boer, B. (2019). Ontogeny of vocal rhythms in harbor seal pups: An exploratory study. Current Zoology, 65(1), 107-120. doi:10.1093/cz/zoy055.

    Abstract

    Puppyhood is a very active social and vocal period in a harbor seal's (Phoca vitulina) life. An important feature of vocalizations is their temporal and rhythmic structure, and understanding vocal timing and rhythms in harbor seals is critical to a cross-species hypothesis in evolutionary neuroscience that links vocal learning, rhythm perception, and synchronization. This study utilized analytical techniques that may best capture rhythmic structure in pup vocalizations with the goal of examining whether (1) harbor seal pups show rhythmic structure in their calls and (2) rhythms evolve over time. Calls of 3 wild-born seal pups were recorded daily over the course of 1-3 weeks; 3 temporal features were analyzed using 3 complementary techniques. We identified temporal and rhythmic structure in pup calls across different time windows. The calls of harbor seal pups exhibit some degree of temporal and rhythmic organization, which evolves over puppyhood and resembles that of other species' interactive communication. We suggest next steps for investigating call structure in harbor seal pups and propose comparative hypotheses to test in other pinniped species.
  • Ravignani, A., Filippi, P., & Fitch, W. T. (2019). Perceptual tuning influences rule generalization: Testing humans with monkey-tailored stimuli. i-Perception, 10(2), 1-5. doi:10.1177/2041669519846135.

    Abstract

    Comparative research investigating how nonhuman animals generalize patterns of auditory stimuli often uses sequences of human speech syllables and reports limited generalization abilities in animals. Here, we reverse this logic, testing humans with stimulus sequences tailored to squirrel monkeys. When test stimuli are familiar (human voices), humans succeed in two types of generalization. However, when the same structural rule is instantiated over unfamiliar but perceivable sounds within squirrel monkeys’ optimal hearing frequency range, human participants master only one type of generalization. These findings have methodological implications for the design of comparative experiments, which should be fair towards all tested species’ proclivities and limitations.

    Additional information

    Supplemental material files
  • Ravignani, A. (2019). Singing seals imitate human speech. Journal of Experimental Biology, 222: jeb208447. doi:10.1242/jeb.208447.
  • Ravignani, A. (2019). Rhythm and synchrony in animal movement and communication. Current Zoology, 65(1), 77-81. doi:10.1093/cz/zoy087.

    Abstract

    Animal communication and motoric behavior develop over time. Often, this temporal dimension has communicative relevance and is organized according to structural patterns. In other words, time is a crucial dimension for rhythm and synchrony in animal movement and communication. Rhythm is defined as temporal structure at a second-millisecond time scale (Kotz et al. 2018). Synchrony is defined as precise co-occurrence of 2 behaviors in time (Ravignani 2017).

    Rhythm, synchrony, and other forms of temporal interaction are taking center stage in animal behavior and communication. Several critical questions include, among others: what species show which rhythmic predispositions? How does a species’ sensitivity for, or proclivity towards, rhythm arise? What are the species-specific functions of rhythm and synchrony, and are there functional trends across species? How did similar or different rhythmic behaviors evolve in different species? This Special Column aims at collecting and contrasting research from different species, perceptual modalities, and empirical methods. The focus is on timing, rhythm and synchrony in the second-millisecond range.

    Three main approaches are commonly adopted to study animal rhythms, with a focus on: 1) spontaneous individual rhythm production, 2) group rhythms, or 3) synchronization experiments. I concisely introduce them below (see also Kotz et al. 2018; Ravignani et al. 2018).
  • Ravignani, A., Dalla Bella, S., Falk, S., Kello, C. T., Noriega, F., & Kotz, S. A. (2019). Rhythm in speech and animal vocalizations: A cross‐species perspective. Annals of the New York Academy of Sciences, 1453(1), 79-98. doi:10.1111/nyas.14166.

    Abstract

    Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive/behavioral traits in other species to determine which traits are shared between species and which are recent human inventions. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, are perceived and produced, their biological and developmental bases, and communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross‐species research. We report links between vocal perception and motor coordination and the differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross‐species perspective of speech rhythm, our review puts some pieces of the puzzle together.
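
    Illustrative code sketch

    One commonly used quantitative technique for describing rhythmic structure in acoustic sequences, of the general kind this review covers, is the normalized pairwise variability index (nPVI) computed over successive durations or inter-onset intervals. The function below is a minimal generic implementation; the example interval values are invented and do not come from the paper.

        # Normalized pairwise variability index (nPVI) over a sequence of durations.
        def npvi(intervals):
            """Return the nPVI of a sequence of durations (at least two values)."""
            if len(intervals) < 2:
                raise ValueError("nPVI needs at least two intervals")
            pairs = zip(intervals, intervals[1:])
            terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
            return 100 * sum(terms) / len(terms)

        # An isochronous sequence scores 0; alternating long-short durations
        # score higher, reflecting greater durational contrast between neighbours.
        print(npvi([0.5, 0.5, 0.5, 0.5]))       # -> 0.0
        print(npvi([0.3, 0.6, 0.3, 0.6, 0.3]))  # -> about 66.7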
  • Ravignani, A. (2019). Seeking shared ground in space. Science, 366(6466), 696. doi:10.1126/science.aay6955.
  • Ravignani, A. (2019). Timing of antisynchronous calling: A case study in a harbor seal pup (Phoca vitulina). Journal of Comparative Psychology, 133(2), 272-277. doi:10.1037/com0000160.

    Abstract

    Alternative mathematical models predict differences in how animals adjust the timing of their calls. Differences can be measured as the effect of the timing of a conspecific call on the rate and period of calling of a focal animal, and the lag between the two. Here, I test these alternative hypotheses by tapping into harbor seals’ (Phoca vitulina) mechanisms for spontaneous timing. Both socioecology and vocal behavior of harbor seals make them an interesting model species to study call rhythm and timing. Here, a wild-born seal pup was tested in controlled laboratory conditions. Based on previous recordings of her vocalizations and those of others, I designed playback experiments adapted to that specific animal. The call onsets of the animal were measured as a function of tempo, rhythmic regularity, and spectral properties of the playbacks. The pup adapted the timing of her calls in response to conspecifics’ calls. Rather than responding at a fixed time delay, the pup adjusted her calls’ onset to occur at a fraction of the playback tempo, showing a relative-phase antisynchrony. Experimental results were confirmed via computational modeling. This case study lends preliminary support to a classic mathematical model of animal behavior—Hamilton’s selfish herd—in the acoustic domain.
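
    Illustrative code sketch

    The relative-phase measure underlying the antisynchrony finding described above can be illustrated with a minimal calculation: each call onset is expressed as a phase within the playback cycle, and values clustering near 0.5 indicate calling roughly midway between playback calls. The onset times and playback period below are invented, and a simple arithmetic mean is used instead of the circular statistics one would normally apply.

        # Relative phase of call onsets with respect to a periodic playback.
        # All times are hypothetical; values near 0.5 suggest antisynchrony.
        playback_period = 2.0                  # seconds between playback calls
        pup_onsets = [1.1, 3.0, 4.9, 7.1]      # hypothetical call onset times (s)

        def relative_phase(onset, period):
            """Phase of an onset within the playback cycle, in the range [0, 1)."""
            return (onset % period) / period

        phases = [relative_phase(t, playback_period) for t in pup_onsets]
        print([round(p, 2) for p in phases])   # values near 0.5
        print("mean phase:", round(sum(phases) / len(phases), 2))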
  • Ravignani, A. (2019). Understanding mammals, hands-on [Review of the book Mammalogy techniques lab manual by J. M. Ryan]. Journal of Mammalogy, 100(5), 1695-1696. doi:10.1093/jmammal/gyz132.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Larger communities create more systematic languages. Proceedings of the Royal Society B: Biological Sciences, 286(1907): 20191262. doi:10.1098/rspb.2019.1262.

    Abstract

    Understanding worldwide patterns of language diversity has long been a goal for evolutionary scientists, linguists and philosophers. Research over the past decade has suggested that linguistic diversity may result from differences in the social environments in which languages evolve. Specifically, recent work found that languages spoken in larger communities typically have more systematic grammatical structures. However, in the real world, community size is confounded with other social factors such as network structure and the number of second language learners in the community, and it is often assumed that linguistic simplification is driven by these factors instead. Here, we show that in contrast to previous assumptions, community size has a unique and important influence on linguistic structure. We experimentally examine the live formation of new languages created in the laboratory by small and larger groups, and find that larger groups of interacting participants develop more systematic languages over time, and do so faster and more consistently than small groups. Small groups also vary more in their linguistic behaviours, suggesting that small communities are more vulnerable to drift. These results show that community size predicts patterns of language diversity, and suggest that an increase in community size might have contributed to language evolution.
