Publications

  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish and they indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Neither the native Mandarin nor the native Spanish listeners took full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed comprehension patterns for native and non-native English similar to those of native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Ernestus, M. (2012). Segmental within-speaker variation. In A. C. Cohn, C. Fougeron, & M. K. Huffman (Eds.), The Oxford handbook of laboratory phonology (pp. 93-102). New York: Oxford University Press.
  • Eryilmaz, K., & Little, H. (2017). Using Leap Motion to investigate the emergence of structure in speech and language. Behavior Research Methods, 49(5), 1748-1768. doi:10.3758/s13428-016-0818-x.

    Abstract

    In evolutionary linguistics, experiments using artificial signal spaces are being used to investigate the emergence of speech structure. These signal spaces need to be continuous, non-discretised spaces from which discrete units and patterns can emerge. They need to be dissimilar from, but comparable with, the vocal tract, in order to minimise interference from pre-existing linguistic knowledge, while informing us about language. This is a hard balance to strike. This article outlines a new approach which uses the Leap Motion, an infra-red controller which can convert manual movement in 3D space into sound. The signal space using this approach is more flexible than signal spaces in previous attempts. Further, output data using this approach is simpler to arrange and analyse. The experimental interface was built using free and mostly open-source libraries in Python. We provide our source code for other researchers as open source.
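
    The movement-to-sound mapping described above can be illustrated with a minimal Python sketch. It assumes a normalised (x, y, z) palm position has already been obtained from the controller; the particular mapping (height to pitch, depth to loudness, lateral position to vibrato rate) and the ranges are illustrative assumptions, not the authors' actual parameterisation.

        import wave
        import numpy as np

        SAMPLE_RATE = 44100

        def position_to_tone(x, y, z, duration=0.2):
            """Map a normalised palm position (each coordinate in [0, 1]) to a short tone.

            Illustrative mapping: height (y) -> pitch, depth (z) -> loudness,
            lateral position (x) -> vibrato rate.
            """
            freq = 220.0 + 660.0 * y                      # 220-880 Hz
            amp = 0.2 + 0.7 * z                           # quiet to loud
            vibrato_rate = 1.0 + 9.0 * x                  # 1-10 Hz
            t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
            vibrato = 0.01 * np.sin(2.0 * np.pi * vibrato_rate * t)
            return amp * np.sin(2.0 * np.pi * freq * t * (1.0 + vibrato))

        def write_wav(samples, path="signal.wav"):
            """Write mono 16-bit PCM audio."""
            pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
            with wave.open(path, "wb") as f:
                f.setnchannels(1)
                f.setsampwidth(2)
                f.setframerate(SAMPLE_RATE)
                f.writeframes(pcm.tobytes())

        # A short "trajectory" of hand positions, standing in for frames streamed from the device.
        trajectory = [(0.1, 0.2, 0.5), (0.4, 0.6, 0.6), (0.8, 0.9, 0.4)]
        write_wav(np.concatenate([position_to_tone(*p) for p in trajectory]))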
  • Escudero, P., Simon, E., & Mitterer, H. (2012). The perception of English front vowels by North Holland and Flemish listeners: Acoustic similarity predicts and explains cross-linguistic and L2 perception. Journal of Phonetics, 40, 280-288. doi:10.1016/j.wocn.2011.11.004.

    Abstract

    We investigated whether regional differences in the native language (L1) influence the perception of second language (L2) sounds. Many cross-language and L2 perception studies have assumed that the degree of acoustic similarity between L1 and L2 sounds predicts cross-linguistic and L2 performance. The present study tests this assumption by examining the perception of the English contrast between /ɛ/ and /æ/ in native speakers of the Dutch spoken in North Holland (the Netherlands) and in East- and West-Flanders (Belgium). A Linear Discriminant Analysis on acoustic data from both dialects showed that their differences in vowel production, as reported in Adank, van Hout, and Van de Velde (2007), should influence the perception of the L2 vowels if listeners focus on the vowels' acoustic/auditory properties. Indeed, the results of categorization tasks with Dutch or English vowels as response options showed that the two listener groups differed as predicted by the discriminant analysis. Moreover, the results of the English categorization task revealed that both groups of Dutch listeners displayed the asymmetric pattern found in previous word recognition studies, i.e. English /æ/ was more frequently confused with English /ɛ/ than the reverse. This suggests a strong link between previous L2 word learning results and the present L2 perceptual assimilation patterns.
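
    As a concrete illustration of this kind of acoustic analysis, the sketch below fits a Linear Discriminant Analysis to synthetic F1/F2 formant values for two vowel categories using scikit-learn. The category means and spreads are placeholder values, not measurements from this study, and the actual analysis may have used additional acoustic dimensions.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)

        def sample_vowel(f1_mean, f2_mean, n=50):
            # Synthetic F1/F2 formant values (Hz) scattered around a category mean.
            return rng.normal(loc=[f1_mean, f2_mean], scale=[40.0, 80.0], size=(n, 2))

        # Placeholder category means for the two English front vowels.
        X = np.vstack([sample_vowel(580, 1800), sample_vowel(700, 1650)])
        y = np.array(["eh"] * 50 + ["ae"] * 50)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("within-sample classification accuracy:", lda.score(X, y))
        # How often tokens of one vowel are assigned to the other category is the kind of
        # acoustic-overlap evidence used to predict listeners' perceptual confusions.
        print(lda.predict_proba(X[:3]).round(2))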
  • Esteve-Gibert, N., Prieto, P., & Liszkowski, U. (2017). Twelve-month-olds understand social intentions based on prosody and gesture shape. Infancy, 22, 108-129. doi:10.1111/infa.12146.

    Abstract

    Infants infer social and pragmatic intentions underlying attention-directing gestures, but the basis on which infants make these inferences is not well understood. Previous studies suggest that infants rely on information from preceding shared action contexts and joint perceptual scenes. Here, we tested whether 12-month-olds use information from act-accompanying cues, in particular prosody and hand shape, to guide their pragmatic understanding. In Experiment 1, caregivers directed infants’ attention to an object to request it, share interest in it, or inform them about a hidden aspect. Caregivers used distinct prosodic and gestural patterns to express each pragmatic intention. Experiment 2 was identical except that experimenters provided identical lexical information across conditions and used three sets of trained prosodic and gestural patterns. In all conditions, the joint perceptual scenes and preceding shared action contexts were identical. In both experiments, infants reacted appropriately to the adults’ intentions by attending to the object mostly in the sharing interest condition, offering the object mostly in the imperative condition, and searching for the referent mostly in the informing condition. Infants’ ability to comprehend pragmatic intentions based on prosody and gesture shape expands infants’ communicative understanding from common activities to novel situations for which shared background knowledge is missing.
  • Estruch, S. B., Buzon, V., Carbo, L. R., Schorova, L., Luders, J., & Estebanez-Perpina, E. (2012). The oncoprotein BCL11A binds to Orphan Nuclear Receptor TLX and potentiates its transrepressive function. PLoS One, 7(6): e37963. doi:10.1371/journal.pone.0037963.

    Abstract

    Nuclear orphan receptor TLX (NR2E1) functions primarily as a transcriptional repressor and its pivotal role in brain development, glioblastoma, mental retardation and retinopathologies make it an attractive drug target. TLX is expressed in the neural stem cells (NSCs) of the subventricular zone and the hippocampus subgranular zone, regions with persistent neurogenesis in the adult brain, and functions as an essential regulator of NSC maintenance and self-renewal. Little is known about the network of TLX interactors, and only a few TLX coregulators have been described. To identify and characterize novel TLX-binders and possible coregulators, we performed yeast-two-hybrid (Y2H) screens of a human adult brain cDNA library using different TLX constructs as baits. Our screens identified multiple clones of Atrophin-1 (ATN1), a previously described TLX interactor. In addition, we identified an interaction with the oncoprotein and zinc finger transcription factor BCL11A (CTIP1/Evi9), a key player in the hematopoietic system and in major blood-related malignancies. This interaction was validated by expression and coimmunoprecipitation in human cells. BCL11A potentiated the transrepressive function of TLX in an in vitro reporter gene assay. Our work suggests that BCL11A is a novel TLX coregulator that might be involved in TLX-dependent gene regulation in the brain.
  • Evans, N., Levinson, S. C., & Sterelny, K. (2021). Kinship revisited. Biological theory, 16, 123-126. doi:10.1007/s13752-021-00384-9.
  • Evans, N., Levinson, S. C., & Sterelny, K. (Eds.). (2021). Thematic issue on evolution of kinship systems [Special Issue]. Biological theory, 16.
  • Eviatar, Z., & Huettig, F. (Eds.). (2021). Literacy and writing systems [Special Issue]. Journal of Cultural Cognitive Science.
  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Fahrenfort, J. J., Snijders, T. M., Heinen, K., van Gaal, S., & Scholte, H. S. (2012). Neuronal integration in visual cortex elevates face category tuning to conscious face perception. Proceedings of the National Academy of Sciences of the United States of America, 109(52), 21504-21509. doi:10.1073/pnas.1207414110.
  • Falk, J. J., Zhang, Y., Scheutz, M., & Yu, C. (2021). Parents adaptively use anaphora during parent-child social interaction. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 1472-1478). Vienna: Cognitive Science Society.

    Abstract

    Anaphora, a ubiquitous feature of natural language, poses a particular challenge to young children as they first learn language due to its referential ambiguity. In spite of this, parents and caregivers use anaphora frequently in child-directed speech, potentially presenting a risk to effective communication if children do not yet have the linguistic capabilities of resolving anaphora successfully. Through an eye-tracking study in a naturalistic free-play context, we examine the strategies that parents employ to calibrate their use of anaphora to their child's linguistic development level. We show that, in this way, parents are able to intuitively scaffold the complexity of their speech such that greater referential ambiguity does not hurt overall communication success.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Fawcett, C., & Liszkowski, U. (2012). Infants anticipate others’ social preferences. Infant and Child Development, 21, 239-249. doi:10.1002/icd.739.

    Abstract

    In the current eye-tracking study, we explored whether 12-month-old infants can predict others' social preferences. We showed infants scenes in which two characters alternately helped or hindered an agent in his goal of climbing a hill. In a control condition, the two characters moved up and down the hill in identical ways to the helper and hinderer but did not make contact with the agent; thus, they did not cause him to reach or fail to reach his goal. Following six alternating familiarization trials of helping and hindering interactions (help-hinder condition) or up and down interactions (up-down condition), infants were shown one test trial in which they could visually anticipate the agent approaching one of the two characters. As predicted, infants in the help-hinder condition made significantly more visual anticipations toward the helping than hindering character, suggesting that they predicted the agent to approach the helping character. In contrast, infants revealed no difference in visual anticipations between the up and down characters. The up-down condition served to control for low-level perceptual explanations of the results for the help-hinder condition. Thus, together the results reveal that 12-month-old infants make predictions about others' behaviour and social preferences from a third-party perspective.
  • Fawcett, C., & Liszkowski, U. (2012). Mimicry and play initiation in 18-month-old infants. Infant Behavior and Development, 35, 689-696. doi:10.1016/j.infbeh.2012.07.014.

    Abstract

    Across two experiments, we examined the relationship between 18-month-old infants’ mimicry and social behavior – particularly invitations to play with an adult play partner. In Experiment 1, we manipulated whether an adult mimicked the infant's play or not during an initial play phase. We found that infants who had been mimicked were subsequently more likely to invite the adult to join their play with a new toy. In addition, they reenacted marginally more steps from a social learning demonstration she gave. In Experiment 2, infants had the chance to spontaneously mimic the adult during the play phase. Complementing Experiment 1, those infants who spent more time mimicking the adult were more likely to invite her to play with a new toy. This effect was specific to play and not apparent in other communicative acts, such as directing the adult's attention to an event or requesting toys. Together, the results suggest that infants use mimicry as a tool to establish social connections with others and that mimicry has specific influences on social behaviors related to initiating subsequent joint interactions.
  • Fawcett, C., & Liszkowski, U. (2012). Observation and initiation of joint action in infants. Child Development, 83, 434-441. doi:10.1111/j.1467-8624.2011.01717.x.

    Abstract

    Infants imitate others’ individual actions, but do they also replicate others’ joint activities? To examine whether observing joint action influences infants’ initiation of joint action, forty-eight 18-month-old infants observed object demonstrations by 2 models acting together (joint action), 2 models acting individually (individual action), or 1 model acting alone (solitary action). Infants’ behavior was examined after they were given each object. Infants in the joint action condition attempted to initiate joint action more often than infants in the other conditions, yet they were equally likely to communicate for other reasons and to imitate the demonstrated object-directed actions. The findings suggest that infants learn to replicate others’ joint activity through observation, an important skill for cultural transmission of shared practices.
  • Fedden, S., & Boroditsky, L. (2012). Spatialization of time in Mian. Frontiers in Psychology, 3, 485. doi:10.3389/fpsyg.2012.00485.

    Abstract

    We examine representations of time among the Mianmin of Papua New Guinea. We begin by describing the patterns of spatial and temporal reference in Mian. Mian uses a system of spatial terms that derive from the orientation and direction of the Hak and Sek rivers and the surrounding landscape. We then report results from a temporal arrangement task administered to a group of Mian speakers. The results reveal evidence for a variety of temporal representations. Some participants arranged time with respect to their bodies (left to right or toward the body). Others arranged time as laid out on the landscape, roughly along the east/west axis (either east to west or west to east). This absolute pattern is consistent both with the axis of the motion of the sun and the orientation of the two rivers, which provides the basis for spatial reference in the Mian language. The results also suggest an increase in left-to-right temporal representations with increasing years of formal education (and the reverse pattern for absolute spatial representations for time). These results extend previous work on spatial representations for time to a new geographical region, physical environment, and linguistic and cultural system.
  • Felker, E. R., Broersma, M., & Ernestus, M. (2021). The role of corrective feedback and lexical guidance in perceptual learning of a novel L2 accent in dialogue. Applied Psycholinguistics, 42, 1029-1055. doi:10.1017/S0142716421000205.

    Abstract

    Perceptual learning of novel accents is a critical skill for second-language speech perception, but little is known about the mechanisms that facilitate perceptual learning in communicative contexts. To study perceptual learning in an interactive dialogue setting while maintaining experimental control of the phonetic input, we employed an innovative experimental method incorporating prerecorded speech into a naturalistic conversation. Using both computer-based and face-to-face dialogue settings, we investigated the effect of two types of learning mechanisms in interaction: explicit corrective feedback and implicit lexical guidance. Dutch participants played an information-gap game featuring minimal pairs with an accented English speaker whose /ε/ pronunciations were shifted to /ɪ/. Evidence for the vowel shift came either from corrective feedback about participants’ perceptual mistakes or from onscreen lexical information that constrained their interpretation of the interlocutor’s words. Corrective feedback explicitly contrasting the minimal pairs was more effective than generic feedback. Additionally, both receiving lexical guidance and exhibiting more uptake for the vowel shift improved listeners’ subsequent online processing of accented words. Comparable learning effects were found in both the computer-based and face-to-face interactions, showing that our results can be generalized to a more naturalistic learning context than traditional computer-based perception training programs.
  • Felker, E. R. (2021). Learning second language speech perception in natural settings. PhD Thesis, Radboud University, Nijmegen.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

    Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionary-old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterate adults were significantly better than illiterate adults and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than identical pairs and reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest that learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
  • Ferrari, A., & Noppeney, U. (2021). Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biology, 19(11): e3001465. doi:10.1371/journal.pbio.3001465.

    Abstract

    To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

    Additional information

    supporting information
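
    The reliability-weighted integration referred to in the abstract above can be sketched for the forced-fusion case under independent Gaussian noise; full Bayesian causal inference additionally weights fusion against segregation by the posterior probability of a common cause, which is not shown here. The values below are illustrative, not parameters from the study.

        import numpy as np

        def fuse(mu_a, sigma_a, mu_v, sigma_v):
            """Reliability-weighted (forced-fusion) spatial estimate under independent Gaussian noise."""
            r_a, r_v = 1.0 / sigma_a ** 2, 1.0 / sigma_v ** 2        # reliabilities
            mu = (r_a * mu_a + r_v * mu_v) / (r_a + r_v)             # reliability-weighted mean
            sigma = np.sqrt(1.0 / (r_a + r_v))                       # fused uncertainty
            return mu, sigma

        # Attending to vision can be modelled as lowering visual noise (sigma_v),
        # which pulls the fused location estimate further toward the visual signal.
        print(fuse(mu_a=10.0, sigma_a=4.0, mu_v=2.0, sigma_v=2.0))   # baseline
        print(fuse(mu_a=10.0, sigma_a=4.0, mu_v=2.0, sigma_v=1.0))   # more reliable vision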
  • Ferreri, A., Ponzoni, M., Govi, S., Pasini, E., Mappa, S., Vino, A., Facchetti, F., Vezzoli, P., Doglioni, C., Berti, E., & Dolcetti, R. (2012). Prevalence of chlamydial infection in a series of 108 primary cutaneous lymphomas. British Journal of Dermatology, 166(5), 1121-1123. doi:10.1111/j.1365-2133.2011.10704.x.
  • Fessler, D. M., Stieger, S., Asaridou, S. S., Bahia, U., Cravalho, M., de Barros, P., Delgado, T., Fisher, M. L., Frederick, D., Perez, P. G., Goetz, C., Haley, K., Jackson, J., Kushnick, G., Lew, K., Pain, E., Florindo, P. P., Pisor, A., Sinaga, E., Sinaga, L., Smolich, L., Sun, D. M., & Voracek, M. (2012). Testing a postulated case of intersexual selection in humans: The role of foot size in judgments of physical attractiveness and age. Evolution and Human Behavior, 33, 147-164. doi:10.1016/j.evolhumbehav.2011.08.002.

    Abstract

    The constituents of attractiveness differ across the sexes. Many relevant traits are dimorphic, suggesting that they are the product of intersexual selection. However, direction of causality is generally difficult to determine, as aesthetic criteria can as readily result from, as cause, dimorphism. Women have proportionately smaller feet than men. Prior work on the role of foot size in attractiveness suggests an asymmetry across the sexes, as small feet enhance female appearance, yet average, rather than large, feet are preferred on men. Previous investigations employed crude stimuli and limited samples. Here, we report on multiple cross-cultural studies designed to overcome these limitations. With the exception of one rural society, we find that small foot size is preferred when judging women, yet no equivalent preference applies to men. Similarly, consonant with the thesis that a preference for youth underlies intersexual selection acting on women, we document an inverse relationship between foot size and perceived age. Examination of preferences regarding, and inferences from, feet viewed in isolation suggests different roles for proportionality and absolute size in judgments of female and male bodies. Although the majority of these results bolster the conclusion that pedal dimorphism is the product of intersexual selection, the picture is complicated by the reversal of the usual preference for small female feet found in one rural society. While possibly explicable in terms of greater emphasis on female economic productivity relative to beauty, the latter finding underscores the importance of employing diverse samples when exploring postulated evolved aesthetic preferences.

    Additional information

    Fessler_2011_Suppl_material.pdf
  • Filippi, P., Charlton, B. D., & Fitch, W. T. (2012). Do Women Prefer More Complex Music around Ovulation? PLoS One, 7(4): e35626. doi:10.1371/journal.pone.0035626.

    Abstract

    The evolutionary origins of music are much debated. One theory holds that the ability to produce complex musical sounds might reflect qualities that are relevant in mate choice contexts and hence, that music is functionally analogous to the sexually-selected acoustic displays of some animals. If so, women may be expected to show heightened preferences for more complex music when they are most fertile. Here, we used computer-generated musical pieces and ovulation predictor kits to test this hypothesis. Our results indicate that women prefer more complex music in general; however, we found no evidence that their preference for more complex music increased around ovulation. Consequently, our findings are not consistent with the hypothesis that a heightened preference/bias in women for more complex music around ovulation could have played a role in the evolution of music. We go on to suggest future studies that could further investigate whether sexual selection played a role in the evolution of this universal aspect of human culture.
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pasukonis, A., Hoeschele, M., Ocklenburg, S., De Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284: 20170990. doi:10.1098/rspb.2017.0990.

    Abstract

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
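
    The two acoustic parameters named above, fundamental frequency (F0) and spectral centre of gravity, have standard definitions that can be computed directly from a waveform. The sketch below gives plain numpy implementations on a toy harmonic signal; it is illustrative and not the authors' actual extraction pipeline.

        import numpy as np

        def spectral_centroid(signal, sr):
            """Spectral centre of gravity: the amplitude-weighted mean frequency of the spectrum."""
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
            return float(np.sum(freqs * spectrum) / np.sum(spectrum))

        def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
            """Rough fundamental-frequency estimate from the first autocorrelation peak."""
            signal = signal - np.mean(signal)
            ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
            lag_min, lag_max = int(sr / fmax), int(sr / fmin)
            best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
            return sr / best_lag

        sr = 16000
        t = np.arange(int(0.5 * sr)) / sr
        call = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)  # toy harmonic "call"
        print(round(estimate_f0(call, sr), 1), round(spectral_centroid(call, sr), 1))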
  • Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A., & De Boer, B. (2017). Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Current Zoology, 63(4), 445-456. doi:10.1093/cz/zox035.

    Abstract

    The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans' ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox (Tame, Aggressive, and Unselected) in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans' ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans' absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

    Additional information

    zox035_Supp.zip
  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Güntürkün, O., Newen, A., & de Boer, B. (2017). More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cognition & Emotion, 31(5), 879-891. doi:10.1080/02699931.2016.1177489.

    Abstract

    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
  • Filippi, P. (2012). Sintassi, Prosodia e Socialità: le Origini del Linguaggio Verbale [Syntax, prosody, and sociality: The origins of verbal language]. PhD Thesis, Università degli Studi di Palermo, Palermo.

    Abstract

    What is the key cognitive ability that makes humans unique among all the other animals? Our work aims at contributing to this research question adopting a comparative and philosophical approach to the origins of verbal language. In particular, we adopt three strands of analysis that are relevant in the context of comparative investigation on the origins of verbal language: a) research on the evolutionary ‘homologies’, which provides information on the phylogenetic traits that humans and other primates share with their common ancestor; b) investigations on “analogous” traits, aimed at finding the evolutionary pressures that guided the emergence of the same biological traits that evolved independently in phylogenetically distant species; and c) the ontogenetic development of the ability to produce and understand verbal language in human infants. Within this comparative approach, we focus on three key aspects that we addressed bridging recent empirical evidence on language processing with philosophical investigations on verbal language: (i) pattern processing as a biological precursor of syntax and algebraic rule acquisition, (ii) sound modulation as a guide to pattern comprehension in speech, animal vocalization and music, (iii) social strategies for mutual understanding, survival and group cohesion. We conclude by emphasizing the interplay between these three sets of cognitive processes as a fundamental dimension grounding the emergence of the human ability for propositional language.
  • Filippi, P., Laaha, S., & Fitch, W. T. (2017). Utterance-final position and pitch marking aid word learning in school-age children. Royal Society Open Science, 4: 161035. doi:10.1098/rsos.161035.

    Abstract

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
  • Fink, B., Bläsing, B., Ravignani, A., & Shackelford, T. K. (2021). Evolution and functions of human dance. Evolution and Human Behavior, 42(4), 351-360. doi:10.1016/j.evolhumbehav.2021.01.003.

    Abstract

    Dance is ubiquitous among humans and has received attention from several disciplines. Ethnographic documentation suggests that dance has a signaling function in social interaction. It can influence mate preferences and facilitate social bonds. Research has provided insights into the proximate mechanisms of dance, individually or when dancing with partners or in groups. Here, we review dance research from an evolutionary perspective. We propose that human dance evolved from ordinary (non-communicative) movements to communicate socially relevant information accurately. The need for accurate social signaling may have accompanied increases in group size and population density. Because of its complexity in production and display, dance may have evolved as a vehicle for expressing social and cultural information. Mating-related qualities and motives may have been the predominant information derived from individual dance movements, whereas group dance offers the opportunity for the exchange of socially relevant content, for coordinating actions among group members, for signaling coalitional strength, and for stabilizing group structures. We conclude that, despite the cultural diversity in dance movements and contexts, the primary communicative functions of dance may be the same across societies.
  • Fisher, N., Hadley, L., Corps, R. E., & Pickering, M. (2021). The effects of dual-task interference in predicting turn-ends in speech and music. Brain Research, 1768: 147571. doi:10.1016/j.brainres.2021.147571.

    Abstract

    Determining when a partner’s spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner’s action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
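
    For readers unfamiliar with linkage statistics, the reported LOD score compares the likelihood of the genotype data under linkage at recombination fraction θ with the likelihood under no linkage (θ = 0.5):

        \mathrm{LOD}(\theta) = \log_{10} \frac{L(\text{data} \mid \theta)}{L(\text{data} \mid \theta = 0.5)}

    A maximum LOD of 6.62 at θ = 0 therefore corresponds to odds of roughly 10^6.62 ≈ 4 × 10^6 in favour of linkage with no recombination between the marker region and the disorder locus, comfortably above the conventional threshold of 3 for declaring linkage.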
  • Fisher, S. E. (2017). Evolution of language: Lessons from the genome. Psychonomic Bulletin & Review, 24(1), 34-40. doi:10.3758/s13423-016-1112-8.

    Abstract

    The post-genomic era is an exciting time for researchers interested in the biology of speech and language. Substantive advances in molecular methodologies have opened up entire vistas of investigation that were not previously possible, or in some cases even imagined. Speculations concerning the origins of human cognitive traits are being transformed into empirically addressable questions, generating specific hypotheses that can be explicitly tested using data collected from both the natural world and experimental settings. In this article, I discuss a number of promising lines of research in this area. For example, the field has begun to identify genes implicated in speech and language skills, including not just disorders but also the normal range of abilities. Such genes provide powerful entry points for gaining insights into neural bases and evolutionary origins, using sophisticated experimental tools from molecular neuroscience and developmental neurobiology. At the same time, sequencing of ancient hominin genomes is giving us an unprecedented view of the molecular genetic changes that have occurred during the evolution of our species. Synthesis of data from these complementary sources offers an opportunity to robustly evaluate alternative accounts of language evolution. Of course, this endeavour remains challenging on many fronts, as I also highlight in the article. Nonetheless, such an integrated approach holds great potential for untangling the complexities of the capacities that make us human.
  • Fisher, V. J. (2017). Dance as Embodied Analogy: Designing an Empirical Research Study. In M. Van Delft, J. Voets, Z. Gündüz, H. Koolen, & L. Wijers (Eds.), Danswetenschap in Nederland. Utrecht: Vereniging voor Dansonderzoek (VDO).
  • Fisher, V. J. (2021). Embodied songs: Insights into the nature of cross-modal meaning-making within sign language informed, embodied interpretations of vocal music. Frontiers in Psychology, 12: 624689. doi:10.3389/fpsyg.2021.624689.

    Abstract

    Embodied song practices involve the transformation of songs from the acoustic modality into an embodied-visual form, to increase meaningful access for d/Deaf audiences. This goes beyond the translation of lyrics, by combining poetic sign language with other bodily movements to embody the para-linguistic expressive and musical features that enhance the message of a song. To date, the limited research into this phenomenon has focussed on linguistic features and interactions with rhythm. The relationship between bodily actions and music has not been probed beyond an assumed implication of conformance. However, as the primary objective is to communicate equivalent meanings, the ways that the acoustic and embodied-visual signals relate to each other should reveal something about underlying conceptual agreement. This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences. These theories are applied to embodiment strategies used by prominent d/Deaf and hearing Dutch practitioners, to unpack the relationship between acoustic songs, their embodied representations, and their broader conceptual and affective meanings. This leads to the proposition that meaning primarily arises through shared patterns of internal relations across a range of amodal and cross-modal features with an emphasis on dynamic qualities. These analogous patterns can inform metaphorical interpretations and trigger shared emotional responses. This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research.
  • Fisher, V. J. (2017). Unfurling the wings of flight: Clarifying ‘the what’ and ‘the why’ of mental imagery use in dance. Research in Dance Education, 18(3), 252-272. doi:10.1080/14647893.2017.1369508.

    Abstract

    This article provides clarification regarding ‘the what’ and ‘the why’ of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from ‘direct’ body oriented simulations along a continuum employing analogous mapping through ‘semi-direct’ literal similarities to abstract metaphors. The reasons for employing imagery are diverse and often overlapping, affecting physical, affective (psychological) and cognitive domains. This paper argues that when dance uses imagery, it is mapping aspects of the world to the body via analogy. Such mapping informs and changes our understanding of both our bodies and the world. In this way, mental imagery use in dance is fundamentally a process of embodied cognition.
  • Fitch, W. T., Friederici, A. D., & Hagoort, P. (Eds.). (2012). Pattern perception and computational complexity [Special Issue]. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1598).
  • Fitch, W. T., Friederici, A. D., & Hagoort, P. (2012). Pattern perception and computational complexity: Introduction to the special issue. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 367(1598), 1925-1932. doi:10.1098/rstb.2012.0099.

    Abstract

    Research on pattern perception and rule learning, grounded in formal language theory (FLT) and using artificial grammar learning paradigms, has exploded in the last decade. This approach marries empirical research conducted by neuroscientists, psychologists and ethologists with the theory of computation and FLT, developed by mathematicians, linguists and computer scientists over the last century. Of particular current interest are comparative extensions of this work to non-human animals, and neuroscientific investigations using brain imaging techniques. We provide a short introduction to the history of these fields, and to some of the dominant hypotheses, to help contextualize these ongoing research programmes, and finally briefly introduce the papers in the current issue.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • Floyd, S. (2012). Book review of [Poéticas de vida en espacios de muerte: Género, poder y estado en la cotidianeidad warao [Poetics of life in spaces of death: Gender, power and the state in Warao everyday life] Charles L. Briggs. Quito, Ecuador: Abya Yala, 2008. 460 pp.]. American Anthropologist, 114, 543-544. doi:10.1111/j.1548-1433.2012.01461_1.x.

  • Floyd, S. (2017). Requesting as a means for negotiating distributed agency. In N. J. Enfield, & P. Kockelman (Eds.), Distributed Agency (pp. 67-78). Oxford: Oxford University Press.
  • Fonteijn, H. M., Modat, M., Clarkson, M. J., Barnes, J., Lehmann, M., Hobbs, N. Z., Scahill, R. I., Tabrizi, S. J., Ourselin, S., Fox, N. C., & Alexander, D. C. (2012). An event-based model for disease progression and its application in familial Alzheimer's disease and Huntington's disease. NeuroImage, 60, 1880-1889. doi:10.1016/j.neuroimage.2012.01.062.

    Abstract

    Understanding the progression of neurological diseases is vital for accurate and early diagnosis and treatment planning. We introduce a new characterization of disease progression, which describes the disease as a series of events, each comprising a significant change in patient state. We provide novel algorithms to learn the event ordering from heterogeneous measurements over a whole patient cohort and demonstrate using combined imaging and clinical data from familial-Alzheimer's and Huntington's disease cohorts. Results provide new detail in the progression pattern of these diseases, while confirming known features, and give unique insight into the variability of progression over the cohort. The key advantage of the new model and algorithms over previous progression models is that they do not require a priori division of the patients into clinical stages. The model and its formulation extend naturally to a wide range of other diseases and developmental processes and accommodate cross-sectional and longitudinal input data.
  • Frances, C., Navarra-Barindelli, E., & Martin, C. D. (2021). Inhibitory and facilitatory effects of phonological and orthographic similarity on L2 word recognition across modalities in bilinguals. Scientific Reports, 11: 12812. doi:10.1038/s41598-021-92259-z.

    Abstract

    Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality as well as the interplay between type of similarity and modality remain largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. Results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated to our current processing models.

    Additional information

    supplementary information
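
    The abstract above frames the results in signal detection terms; sensitivity in a yes/no lexical decision task is typically summarised as d prime, the difference between the z-transformed hit and false-alarm rates. The sketch below shows that computation on illustrative counts (not data from the study); the log-linear correction for extreme rates is one common choice, and it is an assumption rather than the authors' documented procedure.

        from statistics import NormalDist

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Sensitivity (d') for a yes/no task: z(hit rate) - z(false-alarm rate).

            A log-linear correction keeps the rates away from 0 and 1 so the
            z-transform is defined for perfect or empty cells.
            """
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            z = NormalDist().inv_cdf
            return z(hit_rate) - z(fa_rate)

        # Illustrative counts for one participant and condition (not data from the study).
        print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))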
  • Frances, C. (2021). Semantic richness, semantic context, and language learning. PhD Thesis, Universidad del País Vasco-Euskal Herriko Unibertsitatea, Donostia.

    Abstract

    As knowing a foreign language becomes a necessity in the modern world, a large portion of the population is faced with the challenge of learning a language in a classroom. This, in turn, presents a unique set of difficulties. Acquiring a language with limited and artificial exposure makes learning new information and vocabulary particularly difficult. The purpose of this thesis is to help us understand how we can compensate—at least partially—for these difficulties by presenting information in a way that aids learning. In particular, I focused on variables that affect semantic richness—meaning the amount and variability of information associated with a word. Some factors that affect semantic richness are intrinsic to the word and others pertain to that word’s relationship with other items and information. This latter group depends on the context around the to-be-learned items rather than the words themselves. These variables are easier to manipulate than intrinsic qualities, making them more accessible tools for teaching and understanding learning. I focused on two factors: emotionality of the surrounding semantic context and contextual diversity.

    Publication 1 (Frances, de Bruin, et al., 2020b) focused on content learning in a foreign language and whether the emotionality—positive or neutral—of the semantic context surrounding key information aided its learning. This built on prior research that showed a reduction in emotionality in a foreign language. Participants were taught information embedded in either positive or neutral semantic contexts in either their native or foreign language. When they were then tested on these embedded facts, participants’ performance decreased in the foreign language. But, more importantly, they remembered the information from the positive semantic contexts better than that from the neutral ones.

    In Publication 2 (Frances, de Bruin, et al., 2020a), I focused on how emotionality affected vocabulary learning. I taught participants the names of novel items described either in positive or neutral terms in either their native or foreign language. Participants were then asked to recall and recognize the object's name when cued with its image. The effects of language varied with the difficulty of the task, appearing in recall but not recognition tasks. Most importantly, learning the words in a positive context improved learning, particularly of the association between the image of the object and its name.

    In Publication 3 (Frances, Martin, et al., 2020), I explored the effects of contextual diversity—namely, the number of texts a word appears in—on native and foreign language word learning. Participants read several texts that contained novel pseudowords. The total number of encounters with the novel words was held constant, but they appeared in 1, 2, 4, or 8 texts in either their native or foreign language. Increasing contextual diversity—i.e., the number of texts a word appeared in—improved recall and recognition, as well as the ability to match the word with its meaning. Using a foreign language only affected performance when participants had to quickly identify the meaning of the word.

    Overall, I found that the tested contextual factors related to semantic richness—i.e., emotionality of the semantic context and contextual diversity—can be manipulated to improve learning in a foreign language. Using positive emotionality not only improved learning in the foreign language, but it did so to the same extent as in the native language. On a theoretical level, this suggests that the reduction in emotionality in a foreign language is not ubiquitous and might relate to the way in which that language was learned.

    The third article shows an experimental manipulation of contextual diversity and how this can affect learning of a lexical item, even if the amount of information known about the item is kept constant. As in the case of emotionality, the effects of contextual diversity were also the same between languages. Although deducing words from context is dependent on vocabulary size, this does not seem to hinder the benefits of contextual diversity in the foreign language.

    Finally, as a whole, the articles contained in this compendium provide evidence that some aspects of semantic richness can be manipulated contextually to improve learning and memory. In addition, the effects of these factors seem to be independent of language status—meaning, native or foreign—when learning new content. This suggests that learning in a foreign and a native language is not as different as I initially hypothesized, allowing us to take advantage of native language learning tools in the foreign language, as well.
  • Franceschini, R. (2012). Wolfgang Klein und die LiLi [Laudatio]. Zeitschrift für Literaturwissenschaft und Linguistik, 42(168), 5-7.
  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading, scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421-435. doi:10.1111/infa.12182.

    Abstract

    The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
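    A minimal sketch of this style of analysis may help readers see the moving parts: per-word surprisal from a (toy) probabilistic language model and cosine similarity to the preceding words from (toy) distributional vectors serve as regressors for a word-locked brain measure. Everything below is illustrative and invented; it is not the authors' pipeline, stimuli, or models.

    ```python
    # Toy illustration: regress a simulated word-locked brain measure on
    # word predictability (surprisal) and semantic similarity to prior words.
    import numpy as np

    rng = np.random.default_rng(0)
    sentence = ["the", "dog", "chased", "the", "cat"]

    # Toy bigram probabilities P(word | previous word); a real study would use
    # a probabilistic language model trained on a large corpus.
    bigram = {("the", "dog"): 0.2, ("dog", "chased"): 0.3,
              ("chased", "the"): 0.4, ("the", "cat"): 0.1}
    surprisal = np.array([-np.log(bigram.get((p, w), 1e-3))
                          for p, w in zip(sentence, sentence[1:])])

    # Toy distributional vectors; a real study would use trained embeddings.
    vecs = {w: rng.normal(size=50) for w in set(sentence)}

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Similarity of each word (from the second word onward) to the mean vector
    # of all preceding words in the sentence.
    similarity = np.array([
        cosine(vecs[w], np.mean([vecs[p] for p in sentence[:i + 1]], axis=0))
        for i, w in enumerate(sentence[1:])
    ])

    # Simulated word-locked brain measure (e.g., an N400-window amplitude).
    brain = 0.5 * surprisal - 0.3 * similarity + rng.normal(scale=0.1, size=len(surprisal))

    # Regress the brain measure on both model-derived predictors plus an intercept.
    X = np.column_stack([np.ones_like(surprisal), surprisal, similarity])
    coef, *_ = np.linalg.lstsq(X, brain, rcond=None)
    print("intercept, surprisal beta, similarity beta:", coef)
    ```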
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use visual information (e.g., lip-reading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.

    Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
  • Franken, M. K., Huizinga, C. S. M., & Schiller, N. O. (2012). De grafemische buffer: Aspecten van een spellingstoornis [The graphemic buffer: Aspects of a spelling disorder]. Stem-, Spraak- en Taalpathologie, 17(3), 17-36.

    Abstract

    A spelling disorder that received much attention recently is the so-called graphemic buffer impairment. Caramazza et al. (1987) presented the first systematic case study of a patient with this disorder. Miceli & Capasso (2006) provide an extensive overview of the relevant literature. This article adds to the literature by describing a Dutch case, i.e. patient BM. We demonstrate how specific features of Dutch and Dutch orthography interact with the graphemic buffer impairment. In addition, we paid special attention to the influence of grapheme position on the patient’s spelling accuracy. For this we used, in contrast with most of the previous literature, the proportional accountability method described in Machtynger & Shallice (2009). We show that by using this method the underlying error distribution can be more optimally captured than with classical methods. The result of this analysis replicates two distributions that have been previously reported in the literature. Finally, attention will be paid to the role of phonology in the described disorder.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e54900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSC cells, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • French, C. A., Jin, X., Campbell, T. G., Gerfen, E., Groszer, M., Fisher, S. E., & Costa, R. M. (2012). An aetiological Foxp2 mutation causes aberrant striatal activity and alters plasticity during skill learning. Molecular Psychiatry, 17, 1077-1085. doi:10.1038/mp.2011.105.

    Abstract

    Mutations in the human FOXP2 gene cause impaired speech development and linguistic deficits, which have been best characterised in a large pedigree called the KE family. The encoded protein is highly conserved in many vertebrates and is expressed in homologous brain regions required for sensorimotor integration and motor-skill learning, in particular corticostriatal circuits. Independent studies in multiple species suggest that the striatum is a key site of FOXP2 action. Here, we used in vivo recordings in awake-behaving mice to investigate the effects of the KE-family mutation on the function of striatal circuits during motor-skill learning. We uncovered abnormally high ongoing striatal activity in mice carrying an identical mutation to that of the KE family. Furthermore, there were dramatic alterations in striatal plasticity during the acquisition of a motor skill, with most neurons in mutants showing negative modulation of firing rate, starkly contrasting with the predominantly positive modulation seen in control animals. We also observed striking changes in the temporal coordination of striatal firing during motor-skill learning in mutants. Our results indicate that FOXP2 is critical for the function of striatal circuits in vivo, which are important not only for speech but also for other striatal-dependent skills.

    Additional information

    French_2011_Supplementary_Info.pdf
  • Friedrich, P., Forkel, S. J., Amiez, C., Balsters, J. H., Coulon, O., Fan, L., Goulas, A., Hadj-Bouziane, F., Hecht, E. E., Heuer, K., Jiang, T., Latzman, R. D., Liu, X., Loh, K. K., Patil, K. R., Lopez-Persem, A., Procyk, E., Sallet, J., Toro, R., Vickery, S., Weis, S., Wilson, C., Xu, T., Zerbi, V., Eickhoff, S. B., Margulies, D., Mars, R., & Thiebaut de Schotten, M. (2021). Imaging evolution of the primate brain: The next frontier? NeuroImage, 228: 117685. doi:10.1016/j.neuroimage.2020.117685.

    Abstract

    Evolution, as we currently understand it, strikes a delicate balance between animals' ancestral history and adaptations to their current niche. Similarities between species are generally considered inherited from a common ancestor whereas observed differences are considered as more recent evolution. Hence comparing species can provide insights into the evolutionary history. Comparative neuroimaging has recently emerged as a novel subdiscipline, which uses magnetic resonance imaging (MRI) to identify similarities and differences in brain structure and function across species. Whereas invasive histological and molecular techniques are superior in spatial resolution, they are laborious, post-mortem, and oftentimes limited to specific species. Neuroimaging, by comparison, has the advantages of being applicable across species and allows for fast, whole-brain, repeatable, and multi-modal measurements of the structure and function in living brains and post-mortem tissue. In this review, we summarise the current state of the art in comparative anatomy and function of the brain and gather together the main scientific questions to be explored in the future of the fascinating new field of brain evolution derived from comparative neuroimaging.
  • Frost, R. L. A., Monaghan, P., & Tatsumi, T. (2017). Domain-general mechanisms for speech segmentation: The role of duration information in language learning. Journal of Experimental Psychology: Human Perception and Performance, 43(3), 466-476. doi:10.1037/xhp0000325.

    Abstract

    Speech segmentation is supported by multiple sources of information that may either inform language processing specifically, or serve learning more broadly. The Iambic/Trochaic Law (ITL), where increased duration indicates the end of a group and increased emphasis indicates the beginning of a group, has been proposed as a domain-general mechanism that also applies to language. However, language background has been suggested to modulate use of the ITL, meaning that these perceptual grouping preferences may instead be a consequence of language exposure. To distinguish between these accounts, we exposed native-English and native-Japanese listeners to sequences of speech (Experiment 1) and nonspeech stimuli (Experiment 2), and examined segmentation using a 2AFC task. Duration was manipulated over 3 conditions: sequences contained either an initial-item duration increase, or a final-item duration increase, or items of uniform duration. In Experiment 1, language background did not affect the use of duration as a cue for segmenting speech in a structured artificial language. In Experiment 2, the same results were found for grouping structured sequences of visual shapes. The results are consistent with proposals that duration information draws upon a domain-general mechanism that can apply to the special case of language acquisition.
  • Frost, R. L. A., & Casillas, M. (2021). Investigating statistical learning of nonadjacent dependencies: Running statistical learning tasks in non-WEIRD populations. In SAGE Research Methods Cases. doi:10.4135/9781529759181.

    Abstract

    Language acquisition is complex. However, one thing that has been suggested to help learning is the way that information is distributed throughout language; co-occurrences among particular items (e.g., syllables and words) have been shown to help learners discover the words that a language contains and figure out how those words are used. Humans’ ability to draw on this information—“statistical learning”—has been demonstrated across a broad range of studies. However, evidence from non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies is critically lacking, which limits theorizing on the universality of this skill. We extended work on statistical language learning to a new, non-WEIRD linguistic population: speakers of Yélî Dnye, who live on a remote island off mainland Papua New Guinea (Rossel Island). We performed a replication of an existing statistical learning study, training adults on an artificial language with statistically defined words, then examining what they had learnt using a two-alternative forced-choice test. Crucially, we implemented several key amendments to the original study to ensure the replication was suitable for remote field-site testing with speakers of Yélî Dnye. We made critical changes to the stimuli and materials (to test speakers of Yélî Dnye, rather than English), the instructions (we re-worked these significantly, and added practice tasks to optimize participants’ understanding), and the study format (shifting from a lab-based to a portable tablet-based setup). We discuss the requirement for acute sensitivity to linguistic, cultural, and environmental factors when adapting studies to test new populations.

  • Frost, R. L. A., Gaskell, G., Warker, J., Guest, J., Snowdon, R., & Stackhouse, A. (2012). Sleep facilitates acquisition of implicit phonotactic constraints in speech production. Journal of Sleep Research, 21(s1), 249-249. doi:10.1111/j.1365-2869.2012.01044.x.

    Abstract

    Sleep plays an important role in neural reorganisation which underpins memory consolidation. The gradual replacement of hippocampal binding of new memories with intracortical connections helps to link new memories to existing knowledge. This process appears to be faster for memories which fit more easily into existing schemas. Here we seek to investigate whether this more rapid consolidation of schema-conformant information is facilitated by sleep, and the neural basis of this process.
  • Frost, R. L. A., & Monaghan, P. (2017). Sleep-driven computations in speech processing. PLoS One, 12(1): e0169538. doi:10.1371/journal.pone.0169538.

    Abstract

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being over-come by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation.

    Additional information

    Data available
  • De la Fuente, J., Santiago, J., Roma, A., Dumitrache, C., & Casasanto, D. (2012). Facing the past: cognitive flexibility in the front-back mapping of time [Abstract]. Cognitive Processing; Special Issue "ICSC 2012, the 5th International Conference on Spatial Cognition: Space and Embodied Cognition". Poster Presentations, 13(Suppl. 1), S58.

    Abstract

    In many languages the future is in front and the past behind, but in some cultures (like Aymara) the past is in front. Is it possible to find this mapping as an alternative conceptualization of time in other cultures? If so, what are the factors that affect its choice out of the set of available alternatives? In a paper and pencil task, participants placed future or past events either in front or behind a character (a schematic head viewed from above). A sample of 24 Islamic participants (whose language also places the future in front and the past behind) tended to locate the past event in the front box more often than Spanish participants. This result might be due to the greater cultural value assigned to tradition in Islamic culture. The same pattern was found in a sample of Spanish elders (N = 58), which may support that conclusion. Alternatively, the crucial factor may be the amount of attention paid to the past. In a final study, young Spanish adults (N = 200) who had just answered a set of questions about their past showed the past-in-front pattern, whereas questions about their future exacerbated the future-in-front pattern. Thus, the attentional explanation was supported: attended events are mapped to front space in agreement with the experiential connection between attending and seeing. When attention is paid to the past, it tends to occupy the front location in spite of available alternative mappings in the language-culture.
  • Furman, R. (2012). Caused motion events in Turkish: Verbal and gestural representation in adults and children. PhD Thesis, Radboud University Nijmegen/LOT.

    Abstract

    Caused motion events (e.g. a boy pulls a box into a room) are basic events where an Agent (the boy) performs an Action (pulling) that causes a Figure (box) to move in a spatial Path (into) to a Goal (the room). These semantic elements are mapped onto lexical and syntactic structures differently across languages. This dissertation investigates the encoding of caused motion events in Turkish, and the development of this encoding in speech and gesture. First, a linguistic analysis shows that Turkish does not fully fit into the expected typological patterns, and that the encoding of caused motion is determined by the fine-grained lexical semantics of a verb as well as the syntactic construction the verb is integrated into. A grammaticality judgment study conducted with adult Turkish speakers further establishes the fundamentals of the encoding patterns. An event description study compares adults’ verbal and gestural representations of caused motion to those of children aged 3 to 5. The findings indicate that although language-specificity is evident in children’s speech and gestures, the development of adult patterns takes time and occurs after the age of 5. A final study investigates a longitudinal video corpus of the spontaneous speech of Turkish-speaking children aged 1 to 3, and finds that language-specificity is evident from the start in both children’s speech and gesture. Apart from contributing to the literature on the development of Turkish, this dissertation furthers our understanding of the interaction between language-specificity and the multimodal expression of semantic information in event descriptions.
  • Fusaroli, R., Tylén, K., Garly, K., Steensig, J., Christiansen, M. H., & Dingemanse, M. (2017). Measures and mechanisms of common ground: Backchannels, conversational repair, and interactive alignment in free and task-oriented social interactions. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2055-2060). Austin, TX: Cognitive Science Society.

    Abstract

    A crucial aspect of everyday conversational interactions is our ability to establish and maintain common ground. Understanding the relevant mechanisms involved in such social coordination remains an important challenge for cognitive science. While common ground is often discussed in very general terms, different contexts of interaction are likely to afford different coordination mechanisms. In this paper, we investigate the presence and relation of three mechanisms of social coordination – backchannels, interactive alignment and conversational repair – across free and task-oriented conversations. We find significant differences: task-oriented conversations involve higher presence of repair – restricted offers in particular – and backchannel, as well as a reduced level of lexical and syntactic alignment. We find that restricted repair is associated with lexical alignment and open repair with backchannels. Our findings highlight the need to explicitly assess several mechanisms at once and to investigate diverse activities to understand their role and relations.
  • Gaby, A. (2012). The Thaayorre lexicon of putting and taking. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 233-252). Amsterdam: Benjamins.

    Abstract

    This paper investigates the lexical semantics and relative distributions of verbs describing putting and taking events in Kuuk Thaayorre, a Pama-Nyungan language of Cape York (Australia). Thaayorre put/take verbs can be subcategorised according to whether they may combine with an NP encoding a goal, an NP encoding a source, or both. Goal NPs are far more frequent in natural discourse: initial analysis shows 85% of goal-oriented verb tokens to be accompanied by a goal NP, while only 31% of source-oriented verb tokens were accompanied by a source. This finding adds weight to Ikegami’s (1987) assertion of the conceptual primacy of goals over sources, reflected in a cross-linguistic dissymmetry whereby goal-marking is less marked and more widely used than source-marking.
  • Galke, L., Franke, B., Zielke, T., & Scherp, A. (2021). Lifelong learning of graph neural networks for open-world node classification. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN). Piscataway, NJ: IEEE. doi:10.1109/IJCNN52387.2021.9533412.

    Abstract

    Graph neural networks (GNNs) have emerged as the standard method for numerous tasks on graph-structured data such as node classification. However, real-world graphs are often evolving over time and even new classes may arise. We model these challenges as an instance of lifelong learning, in which a learner faces a sequence of tasks and may take over knowledge acquired in past tasks. Such knowledge may be stored explicitly as historic data or implicitly within model parameters. In this work, we systematically analyze the influence of implicit and explicit knowledge. Therefore, we present an incremental training method for lifelong learning on graphs and introduce a new measure based on k-neighborhood time differences to address variances in the historic data. We apply our training method to five representative GNN architectures and evaluate them on three new lifelong node classification datasets. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over the complete history of the graph data. Furthermore, our experiments confirm that implicit knowledge becomes more important when less explicit knowledge is available.
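    To make the receptive-field idea concrete, here is an illustrative sketch (not the authors' implementation) of how one might restrict incremental training to the k-hop neighbourhood of newly arrived nodes; the function name and toy graph are invented for the example.

    ```python
    # Minimal sketch: extract the k-hop receptive field around new nodes so that
    # an incremental GNN update can ignore the rest of the historic graph.
    from collections import deque

    def k_hop_receptive_field(adjacency, new_nodes, k):
        """Return all nodes within k hops of any node in `new_nodes`.

        `adjacency` maps a node id to a set of neighbour ids.
        """
        frontier = deque((n, 0) for n in new_nodes)
        visited = set(new_nodes)
        while frontier:
            node, depth = frontier.popleft()
            if depth == k:
                continue
            for neighbour in adjacency.get(node, ()):
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append((neighbour, depth + 1))
        return visited

    # Toy evolving graph: nodes 0-3 are historic, node 4 arrives in a new task.
    adjacency = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    subgraph_nodes = k_hop_receptive_field(adjacency, new_nodes={4}, k=2)
    print(subgraph_nodes)  # {2, 3, 4}: the part of the graph a 2-layer GNN would need
    ```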
  • Galke, L., Seidlmayer, E., Lüdemann, G., Langnickel, L., Melnychuk, T., Förstner, K. U., Tochtermann, K., & Schultz, C. (2021). COVID-19++: A citation-aware Covid-19 dataset for the analysis of research dynamics. In Y. Chen, H. Ludwig, Y. Tu, U. Fayyad, X. Zhu, X. Hu, S. Byna, X. Liu, J. Zhang, S. Pan, V. Papalexakis, J. Wang, A. Cuzzocrea, & C. Ordonez (Eds.), Proceedings of the 2021 IEEE International Conference on Big Data (pp. 4350-4355). Piscataway, NJ: IEEE.

    Abstract

    COVID-19 research datasets are crucial for analyzing research dynamics. Most collections of COVID-19 research items do not include cited works and do not have annotations from a controlled vocabulary. Starting with ZB MED KE data on COVID-19, which comprises CORD-19, we assemble a new dataset that includes cited work and MeSH annotations for all records. Furthermore, we conduct experiments on the analysis of research dynamics, in which we investigate predicting links in a co-annotation graph created on the basis of the new dataset. Surprisingly, we find that simple heuristic methods are better at predicting future links than more sophisticated approaches such as graph neural networks.
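    For readers unfamiliar with the heuristics mentioned above, the following toy sketch shows two standard ones, common neighbours and the Jaccard coefficient, applied to a miniature co-annotation graph. The terms and edges are invented for illustration and do not come from the dataset described in the paper.

    ```python
    # Toy heuristic link prediction on a co-annotation graph: nodes are
    # MeSH-style annotation terms; an edge means two terms were assigned to the
    # same publication at least once.

    def common_neighbours(adj, u, v):
        return len(adj[u] & adj[v])

    def jaccard(adj, u, v):
        union = adj[u] | adj[v]
        return len(adj[u] & adj[v]) / len(union) if union else 0.0

    adj = {
        "COVID-19":      {"Pandemics", "Vaccines", "RNA, Viral"},
        "Pandemics":     {"COVID-19", "Vaccines", "Public Health"},
        "Vaccines":      {"COVID-19", "Pandemics", "RNA, Viral"},
        "RNA, Viral":    {"COVID-19", "Vaccines"},
        "Public Health": {"Pandemics"},
    }

    candidate = ("Pandemics", "RNA, Viral")  # a pair not yet linked
    print(common_neighbours(adj, *candidate))   # 2 shared neighbours
    print(round(jaccard(adj, *candidate), 2))   # 0.67
    ```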
  • Galke, L., Mai, F., Schelten, A., Brunch, D., & Scherp, A. (2017). Using titles vs. full-text as source for automated semantic document annotation. In O. Corcho, K. Janowicz, G. Rizz, I. Tiddi, & D. Garijo (Eds.), Proceedings of the 9th International Conference on Knowledge Capture (K-CAP 2017). New York: ACM.

    Abstract

    We conduct the first systematic comparison of automated semantic annotation based on either the full-text or only on the title metadata of documents. Apart from the prominent text classification baselines kNN and SVM, we also compare recent techniques of Learning to Rank and neural networks and revisit the traditional methods logistic regression, Rocchio, and Naive Bayes. Across three of our four datasets, the performance of the classifications using only titles reaches over 90% of the quality compared to the performance when using the full-text.
  • Galke, L., Saleh, A., & Scherp, A. (2017). Word embeddings for practical information retrieval. In M. Eibl, & M. Gaedke (Eds.), INFORMATIK 2017 (pp. 2155-2167). Bonn: Gesellschaft für Informatik. doi:10.18420/in2017_215.

    Abstract

    We assess the suitability of word embeddings for practical information retrieval scenarios. Thus, we assume that users issue ad-hoc short queries where we return the first twenty retrieved documents after applying a boolean matching operation between the query and the documents. We compare the performance of several techniques that leverage word embeddings in the retrieval models to compute the similarity between the query and the documents, namely word centroid similarity, paragraph vectors, Word Mover’s distance, as well as our novel inverse document frequency (IDF) re-weighted word centroid similarity. We evaluate the performance using the ranking metrics mean average precision, mean reciprocal rank, and normalized discounted cumulative gain. Additionally, we inspect the retrieval models’ sensitivity to document length by using either only the title or the full-text of the documents for the retrieval task. We conclude that word centroid similarity is the best competitor to state-of-the-art retrieval models. It can be further improved by re-weighting the word frequencies with IDF before aggregating the respective word vectors of the embedding. The proposed cosine similarity of IDF re-weighted word vectors is competitive to the TF-IDF baseline and even outperforms it in the case of the news domain by a relative margin of 15%.
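    The core idea of IDF re-weighted word centroid similarity can be sketched in a few lines: represent query and document as the IDF-weighted average of their word vectors and rank by cosine similarity. The vectors, IDF values, and documents below are toy stand-ins, not the embeddings or corpora used in the paper.

    ```python
    # Toy sketch of IDF re-weighted word centroid similarity for ranking.
    import numpy as np

    def idf_weighted_centroid(tokens, vectors, idf):
        weights = np.array([idf.get(t, 1.0) for t in tokens])
        mat = np.array([vectors[t] for t in tokens])
        return (weights[:, None] * mat).sum(axis=0) / weights.sum()

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(1)
    vocab = ["election", "poll", "vote", "banana", "fruit"]
    vectors = {w: rng.normal(size=20) for w in vocab}        # stand-in embeddings
    idf = {"election": 2.5, "poll": 2.0, "vote": 1.8,
           "banana": 3.0, "fruit": 2.2}                      # stand-in corpus statistics

    docs = {"news": ["election", "poll", "vote"], "food": ["banana", "fruit"]}
    query = ["election", "vote"]

    q_vec = idf_weighted_centroid(query, vectors, idf)
    ranking = sorted(docs,
                     key=lambda d: cosine(q_vec, idf_weighted_centroid(docs[d], vectors, idf)),
                     reverse=True)
    print(ranking)  # documents ordered by similarity to the query centroid
    ```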
  • Ganushchak, L. Y., Krott, A., & Meyer, A. S. (2012). From gr8 to great: Lexical access to SMS shortcuts. Frontiers in Psychology, 3, 150. doi:10.3389/fpsyg.2012.00150.

    Abstract

    Many contemporary texts include shortcuts, such as cu or phones4u. The aim of this study was to investigate how the meanings of shortcuts are retrieved. A primed lexical decision paradigm was used with shortcuts and the corresponding words as primes. The target word was associatively related to the meaning of the whole prime (cu/see you – goodbye), to a component of the prime (cu/see you – look), or unrelated to the prime. In Experiment 1, primes were presented for 57 ms. For both word and shortcut primes, responses were faster to targets preceded by whole-related than by unrelated primes. No priming from component-related primes was found. In Experiment 2, the prime duration was 1000 ms. The priming effect seen in Experiment 1 was replicated. Additionally, there was priming from component-related word primes, but not from component-related shortcut primes. These results indicate that the meanings of shortcuts can be retrieved without translating them first into corresponding words.
  • Gao, X., Levinthal, B. R., & Stine-Morrow, E. A. L. (2012). The effects of ageing and visual noise on conceptual integration during sentence reading. Quarterly journal of experimental psychology, 65(9), 1833-1847. doi:10.1080/17470218.2012.674146.

    Abstract

    The effortfulness hypothesis implies that difficulty in decoding the surface form, as in the case of age-related sensory limitations or background noise, consumes the attentional resources that are then unavailable for semantic integration in language comprehension. Because ageing is associated with sensory declines, degrading of the surface form by a noisy background can pose an extra challenge for older adults. In two experiments, this hypothesis was tested in a self-paced moving window paradigm in which younger and older readers' online allocation of attentional resources to surface decoding and semantic integration was measured as they read sentences embedded in varying levels of visual noise. When visual noise was moderate (Experiment 1), resource allocation among young adults was unaffected but older adults allocated more resources to decode the surface form at the cost of resources that would otherwise be available for semantic processing; when visual noise was relatively intense (Experiment 2), both younger and older participants allocated more attention to the surface form and less attention to semantic processing. The decrease in attentional allocation to semantic integration resulted in reduced recall of core ideas in both experiments, suggesting that a less organized semantic representation was constructed in noise. The greater vulnerability of older adults at relatively low levels of noise is consistent with the effortfulness hypothesis.
  • Garcia, R., Garrido Rodriguez, G., & Kidd, E. (2021). Developmental effects in the online use of morphosyntactic cues in sentence processing: Evidence from Tagalog. Cognition, 216: 104859. doi:10.1016/j.cognition.2021.104859.

    Abstract

    Children must necessarily process their input in order to learn it, yet the architecture of the developing parsing system and how it interfaces with acquisition is unclear. In the current paper we report experimental and corpus data investigating adult and children's use of morphosyntactic cues for making incremental online predictions of thematic roles in Tagalog, a verb-initial symmetrical voice language of the Philippines. In Study 1, Tagalog-speaking adults completed a visual world eye-tracking experiment in which they viewed pictures of causative actions that were described by transitive sentences manipulated for voice and word order. The pattern of results showed that adults process agent and patient voice differently, predicting the upcoming noun in the patient voice but not in the agent voice, consistent with the observation of a patient voice preference in adult sentence production. In Study 2, our analysis of a corpus of child-directed speech showed that children heard more patient voice- than agent voice-marked verbs. In Study 3, 5-, 7-, and 9-year-old children completed a similar eye-tracking task as used in Study 1. The overall pattern of results suggested that, like the adults in Study 1, children process agent and patient voice differently in a manner that reflects the input distributions, with children developing towards the adult state across early childhood. The results are most consistent with theoretical accounts that identify a key role for input distributions in acquisition and language processing.

    Additional information

    1-s2.0-S001002772100278X-mmc1.docx
  • Gaspard III, J. C., Bauer, G. B., Mann, D. A., Boerner, K., Denum, L., Frances, C., & Reep, R. L. (2017). Detection of hydrodynamic stimuli by the postcranial body of Florida manatees (Trichechus manatus latirostris). Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 203, 111-120. doi:10.1007/s00359-016-1142-8.

    Abstract

    Manatees live in shallow, frequently turbid waters. The sensory means by which they navigate in these conditions are unknown. Poor visual acuity, lack of echolocation, and modest chemosensation suggest that other modalities play an important role. Rich innervation of sensory hairs that cover the entire body and enlarged somatosensory areas of the brain suggest that tactile senses are good candidates. Previous tests of detection of underwater vibratory stimuli indicated that they use passive movement of the hairs to detect particle displacements in the vicinity of a micron or less for frequencies from 10 to 150 Hz. In the current study, hydrodynamic stimuli were created by a sinusoidally oscillating sphere that generated a dipole field at frequencies from 5 to 150 Hz. Go/no-go tests of manatee postcranial mechanoreception of hydrodynamic stimuli indicated excellent sensitivity but about an order of magnitude less than the facial region. When the vibrissae were trimmed, detection thresholds were elevated, suggesting that the vibrissae were an important means by which detection occurred. Manatees were also highly accurate in two-choice directional discrimination: greater than 90% correct at all frequencies tested. We hypothesize that manatees utilize vibrissae as a three-dimensional array to detect and localize low-frequency hydrodynamic stimuli.
  • Gau, R., Noble, S., Heuer, K., Bottenhorn, K. L., Bilgin, I. P., Yang, Y.-F., Huntenburg, J. M., Bayer, J. M., Bethlehem, R. A., Rhoads, S. A., Vogelbacher, C., Borghesani, V., Levitis, E., Wang, H.-T., Van Den Bossche, S., Kobeleva, X., Legarreta, J. H., Guay, S., Atay, S. M., Varoquaux, G. P., Huijser, D. C., Sandström, M. S., Herholz, P., Nastase, S. A., Badhwar, A., Dumas, G., Schwab, S., Moia, S., Dayan, M., Bassil, Y., Brooks, P. P., Mancini, M., Shine, J. M., O’Connor, D., Xie, X., Poggiali, D., Friedrich, P., Heinsfeld, A. S., Riedl, L., Toro, R., Caballero-Gaudes, C., Eklund, A., Garner, K. G., Nolan, C. R., Demeter, D. V., Barrios, F. A., Merchant, J. S., McDevitt, E. A., Oostenveld, R., Craddock, R. C., Rokem, A., Doyle, A., Ghosh, S. S., Nikolaidis, A., Stanley, O. W., Uruñuela, E., Anousheh, N., Arnatkeviciute, A., Auzias, G., Bachar, D., Bannier, E., Basanisi, R., Basavaraj, A., Bedini, M., Bellec, P., Benn, R. A., Berluti, K., Bollmann, S., Bollmann, S., Bradley, C., Brown, J., Buchweitz, A., Callahan, P., Chan, M. Y., Chandio, B. Q., Cheng, T., Chopra, S., Chung, A. W., Close, T. G., Combrisson, E., Cona, G., Constable, R. T., Cury, C., Dadi, K., Damasceno, P. F., Das, S., De Vico Fallani, F., DeStasio, K., Dickie, E. W., Dorfschmidt, L., Duff, E. P., DuPre, E., Dziura, S., Esper, N. B., Esteban, O., Fadnavis, S., Flandin, G., Flannery, J. E., Flournoy, J., Forkel, S. J., Franco, A. R., Ganesan, S., Gao, S., García Alanis, J. C., Garyfallidis, E., Glatard, T., Glerean, E., Gonzalez-Castillo, J., Gould van Praag, C. D., Greene, A. S., Gupta, G., Hahn, C. A., Halchenko, Y. O., Handwerker, D., Hartmann, T. S., Hayot-Sasson, V., Heunis, S., Hoffstaedter, F., Hohmann, D. M., Horien, C., Ioanas, H.-I., Iordan, A., Jiang, C., Joseph, M., Kai, J., Karakuzu, A., Kennedy, D. N., Keshavan, A., Khan, A. R., Kiar, G., Klink, P. C., Koppelmans, V., Koudoro, S., Laird, A. R., Langs, G., Laws, M., Licandro, R., Liew, S.-L., Lipic, T., Litinas, K., Lurie, D. J., Lussier, D., Madan, C. R., Mais, L.-T., Mansour L, S., Manzano-Patron, J., Maoutsa, D., Marcon, M., Margulies, D. S., Marinato, G., Marinazzo, D., Markiewicz, C. J., Maumet, C., Meneguzzi, F., Meunier, D., Milham, M. P., Mills, K. L., Momi, D., Moreau, C. A., Motala, A., Moxon-Emre, I., Nichols, T. E., Nielson, D. M., Nilsonne, G., Novello, L., O’Brien, C., Olafson, E., Oliver, L. D., Onofrey, J. A., Orchard, E. R., Oudyk, K., Park, P. J., Parsapoor, M., Pasquini, L., Peltier, S., Pernet, C. R., Pienaar, R., Pinheiro-Chagas, P., Poline, J.-B., Qiu, A., Quendera, T., Rice, L. C., Rocha-Hidalgo, J., Rutherford, S., Scharinger, M., Scheinost, D., Shariq, D., Shaw, T. B., Siless, V., Simmonite, M., Sirmpilatze, N., Spence, H., Sprenger, J., Stajduhar, A., Szinte, M., Takerkart, S., Tam, A., Tejavibulya, L., Thiebaut de Schotten, M., Thome, I., Tomaz da Silva, L., Traut, N., Uddin, L. Q., Vallesi, A., VanMeter, J. W., Vijayakumar, N., di Oleggio Castello, M. V., Vohryzek, J., Vukojević, J., Whitaker, K. J., Whitmore, L., Wideman, S., Witt, S. T., Xie, H., Xu, T., Yan, C.-G., Yeh, F.-C., Yeo, B. T., & Zuo, X.-N. (2021). Brainhack: Developing a culture of open, inclusive, community-driven neuroscience. Neuron, 109(11), 1769-1775. doi:10.1016/j.neuron.2021.04.001.

    Abstract

    Social factors play a crucial role in the advancement of science. New findings are discussed and theories emerge through social interactions, which usually take place within local research groups and at academic events such as conferences, seminars, or workshops. This system tends to amplify the voices of a select subset of the community—especially more established researchers—thus limiting opportunities for the larger community to contribute and connect. Brainhack (https://brainhack.org/) events (or Brainhacks for short) complement these formats in neuroscience with decentralized 2- to 5-day gatherings, in which participants from diverse backgrounds and career stages collaborate and learn from each other in an informal setting. The Brainhack format was introduced in a previous publication (Cameron Craddock et al., 2016; Figures 1A and 1B). It is inspired by the hackathon model (see glossary in Table 1), which originated in software development and has gained traction in science as a way to bring people together for collaborative work and educational courses. Unlike many hackathons, Brainhacks welcome participants from all disciplines and with any level of experience—from those who have never written a line of code to software developers and expert neuroscientists. Brainhacks additionally replace the sometimes-competitive context of traditional hackathons with a purely collaborative one and also feature informal dissemination of ongoing research through unconferences.

    Additional information

    supplementary information
  • Gebre, B. G., & Wittenburg, P. (2012). Adaptive automatic gesture stroke detection. In J. C. Meister (Ed.), Digital Humanities 2012 Conference Abstracts. University of Hamburg, Germany; July 16–22, 2012 (pp. 458-461).

    Abstract

    Many gesture and sign language researchers manually annotate video recordings to systematically categorize, analyze and explain their observations. The number and kinds of annotations are so diverse and unpredictable that any attempt at developing non-adaptive automatic annotation systems is usually less effective. The trend in the literature has been to develop models that work for average users and for average scenarios. This approach has three main disadvantages. First, it is impossible to know beforehand all the patterns that could be of interest to all researchers. Second, it is practically impossible to find enough training examples for all patterns. Third, it is currently impossible to learn a model that is robustly applicable across all video quality-recording variations.
  • Gebre, B. G., Wittenburg, P., & Lenkiewicz, P. (2012). Towards automatic gesture stroke detection. In N. Calzolari (Ed.), Proceedings of LREC 2012: 8th International Conference on Language Resources and Evaluation (pp. 231-235). European Language Resources Association.

    Abstract

    Automatic annotation of gesture strokes is important for many gesture and sign language researchers. The unpredictable diversity of human gestures and video recording conditions require that we adopt a more adaptive case-by-case annotation model. In this paper, we present a work-in progress annotation model that allows a user to a) track hands/face b) extract features c) distinguish strokes from non-strokes. The hands/face tracking is done with color matching algorithms and is initialized by the user. The initialization process is supported with immediate visual feedback. Sliders are also provided to support a user-friendly adjustment of skin color ranges. After successful initialization, features related to positions, orientations and speeds of tracked hands/face are extracted using unique identifiable features (corners) from a window of frames and are used for training a learning algorithm. Our preliminary results for stroke detection under non-ideal video conditions are promising and show the potential applicability of our methodology.
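    As an illustration of the colour-matching initialisation described above, the sketch below thresholds a frame in HSV space (the role played by the user-adjustable sliders) and takes the centroid of the resulting mask as a rough hand/face position. The HSV bounds and the synthetic frame are invented for the example; this is not the authors' tracker.

    ```python
    # Toy colour-matching tracker: keep pixels inside an HSV range, then use the
    # mask centroid as the tracked position for one frame.
    import numpy as np
    import cv2

    def track_by_colour(frame_bgr, lower_hsv, upper_hsv):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)   # 255 where the colour matches
        moments = cv2.moments(mask, binaryImage=True)
        if moments["m00"] == 0:
            return None, mask                           # nothing matched in this frame
        cx = moments["m10"] / moments["m00"]
        cy = moments["m01"] / moments["m00"]
        return (cx, cy), mask

    # Synthetic 100x100 frame with a skin-coloured square in the upper-left corner.
    frame = np.zeros((100, 100, 3), dtype=np.uint8)
    frame[10:40, 10:40] = (120, 160, 210)               # BGR approximation of a skin tone

    lower = np.array([0, 30, 60], dtype=np.uint8)       # user-initialised lower HSV bound
    upper = np.array([25, 180, 255], dtype=np.uint8)    # user-initialised upper HSV bound
    centre, mask = track_by_colour(frame, lower, upper)
    print(centre)  # approximate centroid of the matched region, or None
    ```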
  • Geipel, I., Lattenkamp, E. Z., Dixon, M. M., Wiegrebe, L., & Page, R. A. (2021). Hearing sensitivity: An underlying mechanism for niche differentiation in gleaning bats. Proceedings of the National Academy of Sciences of the United States of America, 118: e2024943118. doi:10.1073/pnas.2024943118.

    Abstract

    Tropical ecosystems are known for high species diversity. Adaptations permitting niche differentiation enable species to coexist. Historically, research focused primarily on morphological and behavioral adaptations for foraging, roosting, and other basic ecological factors. Another important factor, however, is differences in sensory capabilities. So far, studies mainly have focused on the output of behavioral strategies of predators and their prey preference. Understanding the coexistence of different foraging strategies, however, requires understanding underlying cognitive and neural mechanisms. In this study, we investigate hearing in bats and how it shapes bat species coexistence. We present the hearing thresholds and echolocation calls of 12 different gleaning bats from the ecologically diverse Phyllostomid family. We measured their auditory brainstem responses to assess their hearing sensitivity. The audiograms of these species had similar overall shapes but differed substantially for frequencies below 9 kHz and in the frequency range of their echolocation calls. Our results suggest that differences among bats in hearing abilities contribute to the diversity in foraging strategies of gleaning bats. We argue that differences in auditory sensitivity could be important mechanisms shaping diversity in sensory niches and coexistence of species.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gialluisi, A., Andlauer, T. F. M., Mirza-Schreiber, N., Moll, K., Becker, J., Hoffmann, P., Ludwig, K. U., Czamara, D., St Pourcain, B., Honbolygó, F., Tóth, D., Csépe, V., Huguet, G., Chaix, Y., Iannuzzi, S., Demonet, J.-F., Morris, A. P., Hulslander, J., Willcutt, E. G., DeFries, J. C., Olson, R. K., Smith, S. D., Pennington, B. F., Vaessen, A., Maurer, U., Lyytinen, H., Peyrard-Janvid, M., Leppänen, P. H. T., Brandeis, D., Bonte, M., Stein, J. F., Talcott, J. B., Fauchereau, F., Wilcke, A., Kirsten, H., Müller, B., Francks, C., Bourgeron, T., Monaco, A. P., Ramus, F., Landerl, K., Kere, J., Scerri, T. S., Paracchini, S., Fisher, S. E., Schumacher, J., Nöthen, M. M., Müller-Myhsok, B., & Schulte-Körne, G. (2021). Genome-wide association study reveals new insights into the heritability and genetic correlates of developmental dyslexia. Molecular Psychiatry, 26, 3004-3017. doi:10.1038/s41380-020-00898-x.

    Abstract

    Developmental dyslexia (DD) is a learning disorder affecting the ability to read, with a heritability of 40–60%. A notable part of this heritability remains unexplained, and large genetic studies are warranted to identify new susceptibility genes and clarify the genetic bases of dyslexia. We carried out a genome-wide association study (GWAS) on 2274 dyslexia cases and 6272 controls, testing associations at the single variant, gene, and pathway level, and estimating heritability using single-nucleotide polymorphism (SNP) data. We also calculated polygenic scores (PGSs) based on large-scale GWAS data for different neuropsychiatric disorders and cortical brain measures, educational attainment, and fluid intelligence, testing them for association with dyslexia status in our sample. We observed statistically significant (p < 2.8 × 10⁻⁶) enrichment of associations at the gene level, for LOC388780 (20p13; uncharacterized gene), and for VEPH1 (3q25), a gene implicated in brain development. We estimated an SNP-based heritability of 20–25% for DD, and observed significant associations of dyslexia risk with PGSs for attention deficit hyperactivity disorder (at pT = 0.05 in the training GWAS: OR = 1.23 [1.16; 1.30] per standard deviation increase; p = 8 × 10⁻¹³), bipolar disorder (1.53 [1.44; 1.63]; p = 1 × 10⁻⁴³), schizophrenia (1.36 [1.28; 1.45]; p = 4 × 10⁻²²), psychiatric cross-disorder susceptibility (1.23 [1.16; 1.30]; p = 3 × 10⁻¹²), cortical thickness of the transverse temporal gyrus (0.90 [0.86; 0.96]; p = 5 × 10⁻⁴), educational attainment (0.86 [0.82; 0.91]; p = 2 × 10⁻⁷), and intelligence (0.72 [0.68; 0.76]; p = 9 × 10⁻²⁹). This study suggests an important contribution of common genetic variants to dyslexia risk, and novel genomic overlaps with psychiatric conditions like bipolar disorder, schizophrenia, and cross-disorder susceptibility. Moreover, it revealed the presence of shared genetic foundations with a neural correlate previously implicated in dyslexia by neuroimaging evidence.
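
    The polygenic-score results above are reported as odds ratios per standard-deviation increase in the score. As a generic illustration of how such a figure arises (not the authors' pipeline; real analyses typically also adjust for covariates such as ancestry components), the odds ratio is exp(beta) from a logistic regression of case/control status on a z-standardized polygenic score. The sketch below uses simulated data and an assumed effect size purely for illustration.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Simulated data: a z-standardized polygenic score and case/control labels.
        n = 5000
        pgs = rng.standard_normal(n)                 # hypothetical PGS values
        true_beta = 0.2                              # assumed effect, illustration only
        p_case = 1 / (1 + np.exp(-(-1.0 + true_beta * pgs)))
        status = rng.binomial(1, p_case)

        # Logistic regression of status on the PGS; exp(beta) is the OR per SD increase.
        fit = sm.Logit(status, sm.add_constant(pgs)).fit(disp=0)
        beta, ci = fit.params[1], fit.conf_int()[1]
        print(f"OR per SD = {np.exp(beta):.2f} [{np.exp(ci[0]):.2f}; {np.exp(ci[1]):.2f}], p = {fit.pvalues[1]:.1e}")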
  • Gialluisi, A., Pippucci, T., Anikster, Y., Ozbek, U., Medlej-Hashim, M., Mégarbané, A., & Romeo, G. (2012). Estimating the allele frequency of autosomal recessive disorders through mutational records and consanguinity: The homozygosity index (HI). Annals of Human Genetics, 76, 159-167. doi:10.1111/j.1469-1809.2011.00693.x.

    Abstract

    In principle, mutational records make it possible to estimate frequencies of disease alleles (q) for autosomal recessive disorders using a novel approach based on the calculation of the Homozygosity Index (HI), i.e., the proportion of homozygous patients, which is complementary to the proportion of compound heterozygous patients P(CH). In other words, the rarer the disorder, the higher the HI and the lower the P(CH). To test this hypothesis, we used mutational records of individuals affected with Familial Mediterranean Fever (FMF) and Phenylketonuria (PKU), born to either consanguineous or apparently unrelated parents from six population samples of the Mediterranean region. Despite the unavailability of precise values of the inbreeding coefficient for the general population, which are needed in the case of apparently unrelated parents, our estimates of q are very similar to those of previous descriptive epidemiological studies. Finally, we inferred from simulation studies that the minimum sample size needed to use this approach is 25 patients with either unrelated or first-cousin parents. These results show that the HI can be used to produce a ranking order of allele frequencies of autosomal recessive disorders, especially in populations with high rates of consanguineous marriages.
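
    As defined in the abstract, the HI is simply the share of patients carrying two copies of the same mutation, so HI = 1 − P(CH). A minimal sketch of that bookkeeping, using hypothetical per-patient genotype records (the mutation labels are only examples):

        # Each record lists the two mutant alleles found in one patient.
        records = [
            ("M694V", "M694V"),   # homozygous
            ("M694V", "V726A"),   # compound heterozygous
            ("M680I", "M680I"),   # homozygous
            ("M694V", "M680I"),   # compound heterozygous
        ]

        n_hom = sum(1 for a, b in records if a == b)   # patients homozygous for one mutation
        hi = n_hom / len(records)                      # Homozygosity Index
        p_ch = 1 - hi                                  # proportion of compound heterozygotes
        print(f"HI = {hi:.2f}, P(CH) = {p_ch:.2f}")

    Per the abstract, rarer disorders should yield a higher HI, so comparing HI values across disorders (under comparable mating patterns) orders their allele frequencies.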
  • Gialluisi, A., Guadalupe, T., Francks, C., & Fisher, S. E. (2017). Neuroimaging genetic analyses of novel candidate genes associated with reading and language. Brain and Language, 172, 9-15. doi:10.1016/j.bandl.2016.07.002.

    Abstract

    Neuroimaging measures provide useful endophenotypes for tracing genetic effects on reading and language. A recent Genome-Wide Association Scan Meta-Analysis (GWASMA) of reading and language skills (N = 1862) identified strongest associations with the genes CCDC136/FLNC and RBFOX2. Here, we follow up the top findings from this GWASMA, through neuroimaging genetics in an independent sample of 1275 healthy adults. To minimize multiple-testing, we used a multivariate approach, focusing on cortical regions consistently implicated in prior literature on developmental dyslexia and language impairment. Specifically, we investigated grey matter surface area and thickness of five regions selected a priori: middle temporal gyrus (MTG); pars opercularis and pars triangularis in the inferior frontal gyrus (IFG-PO and IFG-PT); postcentral parietal gyrus (PPG) and superior temporal gyrus (STG). First, we analysed the top associated polymorphisms from the reading/language GWASMA: rs59197085 (CCDC136/FLNC) and rs5995177 (RBFOX2). There was significant multivariate association of rs5995177 with cortical thickness, driven by effects on left PPG, right MTG, right IFG (both PO and PT), and STG bilaterally. The minor allele, previously associated with reduced reading-language performance, showed negative effects on grey matter thickness. Next, we performed exploratory gene-wide analysis of CCDC136/FLNC and RBFOX2; no other associations surpassed significance thresholds. RBFOX2 encodes an important neuronal regulator of alternative splicing. Thus, the prior reported association of rs5995177 with reading/language performance could potentially be mediated by reduced thickness in associated cortical regions. In future, this hypothesis could be tested using sufficiently large samples containing both neuroimaging data and quantitative reading/language scores from the same individuals.

    Additional information

    mmc1.docx
  • Gisladottir, R. S., Chwilla, D., Schriefers, H., & Levinson, S. C. (2012). Speech act recognition in conversation: Experimental evidence. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012) (pp. 1596-1601). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2012/papers/0282/index.html.

    Abstract

    Recognizing the speech acts in our interlocutors’ utterances is a crucial prerequisite for conversation. However, it is not a trivial task given that the form and content of utterances are frequently underspecified for this level of meaning. In the present study we investigate participants’ competence in categorizing speech acts in such action-underspecified sentences and explore the time-course of speech act inferencing using a self-paced reading paradigm. The results demonstrate that participants are able to categorize the speech acts with very high accuracy, based on limited context and without any prosodic information. Furthermore, the results show that the exact same sentence is processed differently depending on the speech act it performs, with reading times starting to differ already at the first word. These results indicate that participants are very good at “getting” the speech acts, opening up a new arena for experimental research on action recognition in conversation.
  • Goodhew, S. C., & Kidd, E. (2017). Language use statistics and prototypical grapheme colours predict synaesthetes' and non-synaesthetes' word-colour associations. Acta Psychologica, 173, 73-86. doi:10.1016/j.actpsy.2016.12.008.

    Abstract

    Synaesthesia is the neuropsychological phenomenon in which individuals experience unusual sensory associations, such as experiencing particular colours in response to particular words. While it was once thought the particular pairings between stimuli were arbitrary and idiosyncratic to particular synaesthetes, there is now growing evidence for a systematic psycholinguistic basis to the associations. Here we sought to assess the explanatory value of quantifiable lexical association measures (via latent semantic analysis; LSA) in the pairings observed between words and colours in synaesthesia. To test this, we had synaesthetes report the particular colours they experienced in response to given concept words, and found that language association between the concept and colour words provided highly reliable predictors of the reported pairings. These results provide convergent evidence for a psycholinguistic basis to synaesthesia, but in a novel way, showing that exposure to particular patterns of associations in language can predict the formation of particular synaesthetic lexical-colour associations. Consistent with previous research, the prototypical synaesthetic colour for the first letter of the word also played a role in shaping the colour for the whole word, and this effect also interacted with language association, such that the effect of the colour for the first letter was stronger as the association between the concept word and the colour word in language increased. Moreover, when a group of non-synaesthetes were asked what colours they associated with the concept words, they produced very similar reports to the synaesthetes that were predicted by both language association and prototypical synaesthetic colour for the first letter of the word. This points to a shared linguistic experience generating the associations for both groups.
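
    The lexical association measure referred to above can be illustrated with a generic LSA-style pipeline: build a document-term matrix, reduce it with a truncated SVD, and take the cosine similarity between a concept word and a colour word in the reduced space. The sketch below uses a tiny invented corpus purely for illustration; a real analysis would derive the word vectors from a large corpus.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = [                                    # toy corpus, illustration only
            "the red apple and the red tomato",
            "grass and leaves are green",
            "the sky and the sea look blue",
            "an apple can be green or red",
        ]
        vectorizer = CountVectorizer()
        doc_term = vectorizer.fit_transform(corpus)   # documents x terms
        svd = TruncatedSVD(n_components=2, random_state=0).fit(doc_term)
        term_vectors = svd.components_.T              # terms x latent dimensions
        vocab = vectorizer.vocabulary_                # term -> column index

        def association(word_a, word_b):
            """Cosine similarity between two words in the reduced (LSA-like) space."""
            va = term_vectors[vocab[word_a]].reshape(1, -1)
            vb = term_vectors[vocab[word_b]].reshape(1, -1)
            return cosine_similarity(va, vb)[0, 0]

        print(association("apple", "red"), association("apple", "blue"))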
  • Gordon, R. L., Ravignani, A., Hyland Bruno, J., Robinson, C. M., Scartozzi, A., Embalabala, R., Niarchou, M., 23andMe Research Team, Cox, N. J., & Creanza, N. (2021). Linking the genomic signatures of human beat synchronization and learned song in birds. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200329. doi:10.1098/rstb.2020.0329.

    Abstract

    The development of rhythmicity is foundational to communicative and social behaviours in humans and many other species, and mechanisms of synchrony could be conserved across species. The goal of the current paper is to explore evolutionary hypotheses linking vocal learning and beat synchronization through genomic approaches, testing the prediction that genetic underpinnings of birdsong also contribute to the aetiology of human interactions with musical beat structure. We combined state-of-the-art genomic datasets that account for underlying polygenicity of these traits: birdsong genome-wide transcriptomics linked to singing in zebra finches, and a human genome-wide association study of beat synchronization. Results of competitive gene set analysis revealed that the genetic architecture of human beat synchronization is significantly enriched for birdsong genes expressed in songbird Area X (a key nucleus for vocal learning, and homologous to human basal ganglia). These findings complement ethological and neural evidence of the relationship between vocal learning and beat synchronization, supporting a framework of some degree of common genomic substrates underlying rhythm-related behaviours in two clades, humans and songbirds (the largest evolutionary radiation of vocal learners). Future cross-species approaches investigating the genetic underpinnings of beat synchronization in a broad evolutionary context are discussed.

    Additional information

    analysis scripts and variables
  • Goriot, C., Unsworth, S., Van Hout, R. W. N. M., Broersma, M., & McQueen, J. M. (2021). Differences in phonological awareness performance: Are there positive or negative effects of bilingual experience? Linguistic Approaches to Bilingualism, 11(3), 425-460. doi:10.1075/lab.18082.gor.

    Abstract

    Children who have knowledge of two languages may show better phonological awareness than their monolingual peers (e.g. Bruck & Genesee, 1995). It remains unclear how much bilingual experience is needed for such advantages to appear, and whether differences in language or cognitive skills alter the relation between bilingualism and phonological awareness. These questions were investigated in this cross-sectional study. Participants (n = 294; 4–7 year-olds, in the first three grades of primary school) were Dutch-speaking pupils attending mainstream monolingual Dutch primary schools or early-English schools providing English lessons from grade 1, and simultaneous Dutch-English bilinguals. We investigated phonological awareness (rhyming, phoneme blending, onset phoneme identification, and phoneme deletion) and its relation to age, Dutch vocabulary, English vocabulary, working memory and short-term memory, and the balance between Dutch and English vocabulary. Small significant (α < .05) effects of bilingualism were found on onset phoneme identification and phoneme deletion, but post-hoc comparisons revealed no robust pairwise differences between the groups. Furthermore, effects of bilingualism sometimes disappeared when differences in language or memory skills were taken into account. Learning two languages simultaneously is not beneficial to – and importantly, also not detrimental to – phonological awareness.

  • Goriot, C., Van Hout, R., Broersma, M., Lobo, V., McQueen, J. M., & Unsworth, S. (2021). Using the Peabody Picture Vocabulary Test in L2 children and adolescents: Effects of L1. International Journal of Bilingual Education and Bilingualism, 24(4), 546-568. doi:10.1080/13670050.2018.1494131.

    Abstract

    This study investigated to what extent the Peabody Picture Vocabulary Test (PPVT-4) is a reliable tool for measuring vocabulary knowledge of English as a second language (L2), and to what extent L1 characteristics affect test outcomes. The PPVT-4 was administered to Dutch pupils in six different age groups (4-15 years old) who were or were not following an English educational programme at school. Our first finding was that the PPVT-4 was not a reliable measure for pupils who were correct on maximally 24 items, but it was reliable for pupils who performed better. Second, both primary-school and secondary-school pupils performed better on items for which the phonological similarity between the English word and its Dutch translation was higher. Third, young inexperienced L2 learners' scores were predicted by Dutch lexical frequency, while older, more experienced pupils' scores were predicted by English frequency. These findings indicate that the PPVT may be inappropriate for use with L2 learners with limited L2 proficiency. Furthermore, comparisons of PPVT scores across learners with different L1s are confounded by effects of L1 frequency and L1-L2 similarity. The PPVT-4 is, however, a suitable measure to compare more proficient L2 learners who have the same L1.
  • Goudbeek, M., Smits, R., Cutler, A., & Swingley, D. (2017). Auditory and phonetic category formation. In H. Cohen, & C. Lefebvre (Eds.), Handbook of categorization in cognitive science (2nd revised ed.) (pp. 687-708). Amsterdam: Elsevier.
  • De Graaf, T. A., Duecker, F., Stankevich, Y., Ten Oever, S., & Sack, A. T. (2017). Seeing in the dark: Phosphene thresholds with eyes open versus closed in the absence of visual inputs. Brain Stimulation, 10(4), 828-835. doi:10.1016/j.brs.2017.04.127.

    Abstract

    Background: Voluntarily opening or closing our eyes results in fundamentally different input patterns and expectancies. Yet it remains unclear how our brains and visual systems adapt to these ocular states.
    Objective/Hypothesis: We here used transcranial magnetic stimulation (TMS) to probe the excitability of the human visual system with eyes open or closed, in the complete absence of visual inputs.
    Methods: Combining Bayesian staircase procedures with computer control of TMS pulse intensity allowed interleaved determination of phosphene thresholds (PT) in both conditions. We measured parieto-occipital EEG baseline activity in several stages to track oscillatory power in the alpha (8-12 Hz) frequency-band, which has previously been shown to be inversely related to phosphene perception.
    Results: Since closing the eyes generally increases alpha power, one might have expected a decrease in excitability (higher PT). While we confirmed a rise in alpha power with eyes closed, visual excitability was actually increased (PT was lower) with eyes closed.
    Conclusions: This suggests that, aside from oscillatory alpha power, additional neuronal mechanisms influence the excitability of early visual cortex. One of these may involve a more internally oriented mode of brain operation, engaged by closing the eyes. In this state, visual cortex may be more susceptible to top-down inputs, to facilitate, for example, multisensory integration or imagery/working memory, although alternative explanations remain possible.

    Additional information

    Supplementary data
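
    A generic way to track the kind of baseline alpha-band (8-12 Hz) measure mentioned above is to estimate the power spectral density of a parieto-occipital channel and integrate it over the alpha range. This is only a minimal sketch with a simulated signal, not the study's own EEG pipeline; the sampling rate and signal are hypothetical.

        import numpy as np
        from scipy.signal import welch

        fs = 250                                              # hypothetical sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        # Toy signal: a 10 Hz oscillation plus noise, standing in for a recorded channel.
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

        freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)        # Welch power spectral density
        alpha = (freqs >= 8) & (freqs <= 12)
        alpha_power = np.trapz(psd[alpha], freqs[alpha])      # integrate PSD over 8-12 Hz
        print(f"alpha-band power: {alpha_power:.3f}")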
  • Grabe, E. (1998). Comparative intonational phonology: English and German. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057683.
  • Grabot, L., Kösem, A., Azizi, L., & Van Wassenhove, V. (2017). Prestimulus Alpha Oscillations and the Temporal Sequencing of Audio-visual Events. Journal of Cognitive Neuroscience, 29(9), 1566-1582. doi:10.1162/jocn_a_01145.

    Abstract

    Perceiving the temporal order of sensory events typically depends on participants' attentional state, thus likely on the endogenous fluctuations of brain activity. Using magnetoencephalography, we sought to determine whether spontaneous brain oscillations could disambiguate the perceived order of auditory and visual events presented in close temporal proximity, that is, at the individual's perceptual order threshold (Point of Subjective Simultaneity [PSS]). Two neural responses were found to index an individual's temporal order perception when contrasting brain activity as a function of perceived order (i.e., perceiving the sound first vs. perceiving the visual event first) given the same physical audiovisual sequence. First, average differences in prestimulus auditory alpha power indicated perceiving the correct ordering of audiovisual events irrespective of which sensory modality came first: a relatively low alpha power indicated perceiving auditory or visual first as a function of the actual sequence order. Additionally, the relative changes in the amplitude of the auditory (but not visual) evoked responses were correlated with participants' correct performance. Crucially, the sign of the magnitude difference in prestimulus alpha power and evoked responses between perceived audiovisual orders correlated with an individual's PSS. Taken together, our results suggest that spontaneous oscillatory activity cannot disambiguate subjective temporal order without prior knowledge of the individual's bias toward perceiving one or the other sensory modality first. Altogether, our results suggest that, under high perceptual uncertainty, the magnitude of prestimulus alpha (de)synchronization indicates the amount of compensation needed to overcome an individual's prior in the serial ordering and temporal sequencing of information.
  • Greenfield, M. D., Honing, H., Kotz, S. A., & Ravignani, A. (Eds.). (2021). Synchrony and rhythm interaction: From the brain to behavioural ecology [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376.
  • Greenfield, M. D., Honing, H., Kotz, S. A., & Ravignani, A. (2021). Synchrony and rhythm interaction: From the brain to behavioural ecology. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200324. doi:10.1098/rstb.2020.0324.

    Abstract

    This theme issue assembles current studies that ask how and why precise synchronization and related forms of rhythm interaction are expressed in a wide range of behaviour. The studies cover human activity, with an emphasis on music, and social behaviour, reproduction and communication in non-human animals. In most cases, the temporally aligned rhythms have short—from several seconds down to a fraction of a second—periods and are regulated by central nervous system pacemakers, but interactions involving rhythms that are 24 h or longer and originate in biological clocks also occur. Across this spectrum of activities, species and time scales, empirical work and modelling suggest that synchrony arises from a limited number of coupled-oscillator mechanisms with which individuals mutually entrain. Phylogenetic distribution of these common mechanisms points towards convergent evolution. Studies of animal communication indicate that many synchronous interactions between the signals of neighbouring individuals are specifically favoured by selection. However, synchronous displays are often emergent properties of entrainment between signalling individuals, and in some situations, the very signallers who produce a display might not gain any benefit from the collective timing of their production.
  • Greenfield, P. M., Slobin, D., Cole, M., Gardner, H., Sylva, K., Levelt, W. J. M., Lucariello, J., Kay, A., Amsterdam, A., & Shore, B. (2017). Remembering Jerome Bruner: A series of tributes to Jerome “Jerry” Bruner, who died in 2016 at the age of 100, reflects the seminal contributions that led him to be known as a co-founder of the cognitive revolution. Observer, 30(2). Retrieved from http://www.psychologicalscience.org/observer/remembering-jerome-bruner.

    Abstract

    Jerome Seymour “Jerry” Bruner was born on October 1, 1915, in New York City. He began his academic career as psychology professor at Harvard University; he ended it as University Professor Emeritus at New York University (NYU) Law School. What happened at both ends and in between is the subject of the richly variegated remembrances that follow. On June 5, 2016, Bruner died in his Greenwich Village loft at age 100. He leaves behind his beloved partner Eleanor Fox, who was also his distinguished colleague at NYU Law School; his son Whitley; his daughter Jenny; and three grandchildren.

    Bruner’s interdisciplinarity and internationalism are seen in the remarkable variety of disciplines and geographical locations represented in the following tributes. The reader will find developmental psychology, anthropology, computer science, psycholinguistics, cognitive psychology, cultural psychology, education, and law represented; geographically speaking, the writers are located in the United States, Canada, the United Kingdom, and the Netherlands. The memories that follow are arranged in roughly chronological order according to when the writers had their first contact with Jerry Bruner.
  • Greenhill, S. J., Wu, C.-H., Hua, X., Dunn, M., Levinson, S. C., & Gray, R. D. (2017). Evolutionary dynamics of language systems. Proceedings of the National Academy of Sciences of the United States of America, 114(42), E8822-E8829. doi:10.1073/pnas.1700388114.

    Abstract

    Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact.
  • De Gregorio, C., Valente, D., Raimondi, T., Torti, V., Miaretsoa, L., Friard, O., Giacoma, C., Ravignani, A., & Gamba, M. (2021). Categorical rhythms in a singing primate. Current Biology, 31, R1363-R1380. doi:10.1016/j.cub.2021.09.032.

    Abstract

    What are the origins of musical rhythm? One approach to the biology and evolution of music consists in finding common musical traits across species. These similarities allow biomusicologists to infer when and how musical traits appeared in our species [1]. A parallel approach to the biology and evolution of music focuses on finding statistical universals in human music [2]. These include rhythmic features that appear above chance across musical cultures. One such universal is the production of categorical rhythms [3], defined as those where temporal intervals between note onsets are distributed categorically rather than uniformly [2,4,5]. Prominent rhythm categories include those with intervals related by small integer ratios, such as 1:1 (isochrony) and 1:2, which translates as some notes being twice as long as their adjacent ones. In humans, universals are often defined in relation to the beat, a top-down cognitive process of inferring a temporal regularity from a complex musical scene [1]. Without assuming the presence of the beat in other animals, one can still investigate its downstream products, namely rhythmic categories with small integer ratios detected in recorded signals. Here we combine the comparative and statistical universals approaches, testing the hypothesis that rhythmic categories and small integer ratios should appear in species showing coordinated group singing [3]. We find that a lemur species displays, in its coordinated songs, the isochronous and 1:2 rhythm categories seen in human music, showing that such categories are not, among mammals, unique to humans [3].

    Additional information

    supplemental information
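
    The rhythm categories described above can be made concrete through the ratio of adjacent inter-onset intervals, r = I_k / (I_k + I_{k+1}): values near 1/2 correspond to 1:1 (isochronous) pairs, and values near 1/3 or 2/3 to 1:2 or 2:1 pairs. The sketch below is a generic illustration with invented onset times, not the authors' analysis code.

        import numpy as np

        onsets = np.array([0.0, 0.4, 0.8, 1.2, 2.0, 2.4, 3.2])       # hypothetical note onsets (s)
        intervals = np.diff(onsets)                                   # inter-onset intervals I_k
        ratios = intervals[:-1] / (intervals[:-1] + intervals[1:])    # r_k = I_k / (I_k + I_{k+1})

        # Assign each ratio to the nearest small-integer category.
        categories = {1 / 2: "1:1", 1 / 3: "1:2", 2 / 3: "2:1"}
        for r in ratios:
            nearest = min(categories, key=lambda target: abs(r - target))
            print(f"r = {r:.2f} -> nearest category {categories[nearest]}")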
  • Gretscher, H., Haun, D. B. M., Liebal, K., & Kaminski, J. (2012). Orang-utans rely on orientation cues and egocentric rules when judging others' perspectives in a competitive food task. Animal Behaviour, 84, 323-331. doi:10.1016/j.anbehav.2012.04.021.

    Abstract

    Adopting the paradigm of a study conducted with chimpanzees, Pan troglodytes (Melis et al. 2006, Journal of Comparative Psychology, 120, 154–162), we investigated orang-utans', Pongo pygmaeus, understanding of others' visual perspectives. More specifically, we examined whether orang-utans would adjust their behaviour in a way that prevents a human competitor from seeing them steal a piece of food. In the task, subjects had to reach through one of two opposing Plexiglas tunnels in order to retrieve a food reward. Both rewards were also physically accessible to a human competitor sitting opposite the subject. Subjects always had the possibility of reaching one piece of food that was outside the human's line of sight. This was because either the human was oriented to one, but not the other, reward or because one tunnel was covered by an opaque barrier and the other remained transparent. In the situation in which the human was oriented towards one reward, the orang-utans successfully avoided the tunnel that the competitor was facing. If one tunnel was covered, they marginally preferred to reach through the opaque versus the transparent tunnel. However, they did so frequently after initially inspecting the transparent tunnel (then switching to the opaque one). Considering only the subjects' initial inspections, they chose randomly between the opaque and transparent tunnel, indicating that their final decision to reach was probably driven by a more egocentric behavioural rule. Overall the results suggest that orang-utans have a limited understanding of others' perspectives, relying mainly on cues from facial and bodily orientation and egocentric rules when making such judgements.
  • Grieco-Calub, T. M., Ward, K. M., & Brehm, L. (2017). Multitasking during degraded speech recognition in school-age children. Trends in Hearing, 21, 1-14. doi:10.1177/2331216516686786.

    Abstract

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task were quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2017). Language-induced visual and semantic biases in visual search are subject to task requirements. Visual Cognition, 25, 225-240. doi:10.1080/13506285.2017.1324934.

    Abstract

    Visual attention is biased by both visual and semantic representations activated by words. We investigated to what extent language-induced visual and semantic biases are subject to task demands. Participants memorized a spoken word for a verbal recognition task, and performed a visual search task during the retention period. Crucially, while the word had to be remembered in all conditions, it was either relevant for the search (as it also indicated the target) or irrelevant (as it only served the memory test afterwards). On critical trials, displays contained objects that were visually or semantically related to the memorized word. When the word was relevant for the search, eye movement biases towards visually related objects arose earlier and more strongly than biases towards semantically related objects. When the word was irrelevant, there was still evidence for visual and semantic biases, but these biases were substantially weaker, and similar in strength and temporal dynamics, without a visual advantage. We conclude that language-induced attentional biases are subject to task requirements.
  • Guadalupe, T., Mathias, S. R., Van Erp, T. G. M., Whelan, C. D., Zwiers, M. P., Abe, Y., Abramovic, L., Agartz, I., Andreassen, O. A., Arias-Vásquez, A., Aribisala, B. S., Armstrong, N. J., Arolt, V., Artiges, E., Ayesa-Arriola, R., Baboyan, V. G., Banaschewski, T., Barker, G., Bastin, M. E., Baune, B. T., Blangero, J., Bokde, A. L., Boedhoe, P. S., Bose, A., Brem, S., Brodaty, H., Bromberg, U., Brooks, S., Büchel, C., Buitelaar, J., Calhoun, V. D., Cannon, D. M., Cattrell, A., Cheng, Y., Conrod, P. J., Conzelmann, A., Corvin, A., Crespo-Facorro, B., Crivello, F., Dannlowski, U., De Zubicaray, G. I., De Zwarte, S. M., Deary, I. J., Desrivières, S., Doan, N. T., Donohoe, G., Dørum, E. S., Ehrlich, S., Espeseth, T., Fernández, G., Flor, H., Fouche, J.-P., Frouin, V., Fukunaga, M., Gallinat, J., Garavan, H., Gill, M., Suarez, A. G., Gowland, P., Grabe, H. J., Grotegerd, D., Gruber, O., Hagenaars, S., Hashimoto, R., Hauser, T. U., Heinz, A., Hibar, D. P., Hoekstra, P. J., Hoogman, M., Howells, F. M., Hu, H., Hulshoff Pol, H. E., Huyser, C., Ittermann, B., Jahanshad, N., Jönsson, E. G., Jurk, S., Kahn, R. S., Kelly, S., Kraemer, B., Kugel, H., Kwon, J. S., Lemaitre, H., Lesch, K.-P., Lochner, C., Luciano, M., Marquand, A. F., Martin, N. G., Martínez-Zalacaín, I., Martinot, J.-L., Mataix-Cols, D., Mather, K., McDonald, C., McMahon, K. L., Medland, S. E., Menchón, J. M., Morris, D. W., Mothersill, O., Maniega, S. M., Mwangi, B., Nakamae, T., Nakao, T., Narayanaswaamy, J. C., Nees, F., Nordvik, J. E., Onnink, A. M. H., Opel, N., Ophoff, R., Martinot, M.-L.-P., Orfanos, D. P., Pauli, P., Paus, T., Poustka, L., Reddy, J. Y., Renteria, M. E., Roiz-Santiáñez, R., Roos, A., Royle, N. A., Sachdev, P., Sánchez-Juan, P., Schmaal, L., Schumann, G., Shumskaya, E., Smolka, M. N., Soares, J. C., Soriano-Mas, C., Stein, D. J., Strike, L. T., Toro, R., Turner, J. A., Tzourio-Mazoyer, N., Uhlmann, A., Valdés Hernández, M., Van den Heuvel, O. A., Van der Meer, D., Van Haren, N. E., Veltman, D. J., Venkatasubramanian, G., Vetter, N. C., Vuletic, D., Walitza, S., Walter, H., Walton, E., Wang, Z., Wardlaw, J., Wen, W., Westlye, L. T., Whelan, R., Wittfeld, K., Wolfers, T., Wright, M. J., Xu, J., Xu, X., Yun, J.-Y., Zhao, J., Franke, B., Thompson, P. M., Glahn, D. C., Mazoyer, B., Fisher, S. E., & Francks, C. (2017). Human subcortical asymmetries in 15,847 people worldwide reveal effects of age and sex. Brain Imaging and Behavior, 11(5), 1497-1514. doi:10.1007/s11682-016-9629-z.

    Abstract

    The two hemispheres of the human brain differ functionally and structurally. Despite over a century of research, the extent to which brain asymmetry is influenced by sex, handedness, age, and genetic factors is still controversial. Here we present the largest ever analysis of subcortical brain asymmetries, in a harmonized multi-site study using meta-analysis methods. Volumetric asymmetry of seven subcortical structures was assessed in 15,847 MRI scans from 52 datasets worldwide. There were sex differences in the asymmetry of the globus pallidus and putamen. Heritability estimates, derived from 1170 subjects belonging to 71 extended pedigrees, revealed that additive genetic factors influenced the asymmetry of these two structures and that of the hippocampus and thalamus. Handedness had no detectable effect on subcortical asymmetries, even in this unprecedented sample size, but the asymmetry of the putamen varied with age. Genetic drivers of asymmetry in the hippocampus, thalamus and basal ganglia may affect variability in human cognition, including susceptibility to psychiatric disorders.

    Additional information

    11682_2016_9629_MOESM1_ESM.pdf
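
    Volumetric asymmetry in studies of this kind is commonly summarized per structure with an asymmetry index such as AI = (L − R) / (L + R); the abstract does not state the exact formula used here, so the sketch below is only a generic illustration with hypothetical volumes.

        import numpy as np

        left = np.array([1620.0, 4110.0, 7830.0])     # hypothetical left-hemisphere volumes (mm^3)
        right = np.array([1580.0, 4050.0, 7900.0])    # matching right-hemisphere volumes

        ai = (left - right) / (left + right)          # positive = leftward, negative = rightward
        print(np.round(ai, 4))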
