Publications

  • Eviatar, Z., & Huettig, F. (2021). The literate mind. Journal of Cultural Cognitive Science, 5, 81-84. doi:10.1007/s41809-021-00086-5.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs of 0 ms and 1000 ms were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

  • Falcaro, M., Pickles, A., Newbury, D. F., Addis, L., Banfield, E., Fisher, S. E., Monaco, A. P., Simkin, Z., Conti-Ramsden, G., & the SLI Consortium (2008). Genetic and phenotypic effects of phonological short-term memory and grammatical morphology in specific language impairment. Genes, Brain and Behavior, 7, 393-402. doi:10.1111/j.1601-183X.2007.00364.x.

    Abstract

    Deficits in phonological short-term memory and aspects of verb grammar morphology have been proposed as phenotypic markers of specific language impairment (SLI) with the suggestion that these traits are likely to be under different genetic influences. This investigation in 300 first-degree relatives of 93 probands with SLI examined familial aggregation and genetic linkage of two measures thought to index these two traits, non-word repetition and tense marking. In particular, the involvement of chromosomes 16q and 19q was examined as previous studies found these two regions to be related to SLI. Results showed a strong association between relatives' and probands' scores on non-word repetition. In contrast, no association was found for tense marking when examined as a continuous measure. However, significant familial aggregation was found when tense marking was treated as a binary measure with a cut-off point of -1.5 SD, suggestive of the possibility that qualitative distinctions in the trait may be familial while quantitative variability may be more a consequence of non-familial factors. Linkage analyses supported previous findings of the SLI Consortium of linkage to chromosome 16q for phonological short-term memory and to chromosome 19q for expressive language. In addition, we report new findings that relate to the past tense phenotype. For the continuous measure, linkage was found on both chromosomes, but evidence was stronger on chromosome 19. For the binary measure, linkage was observed on chromosome 19 but not on chromosome 16.
  • Favier, S., & Huettig, F. (2021). Are there core and peripheral syntactic structures? Experimental evidence from Dutch native speakers with varying literacy levels. Lingua, 251: 102991. doi:10.1016/j.lingua.2020.102991.

    Abstract

    Some theorists posit the existence of a ‘core’ grammar that virtually all native speakers acquire, and a ‘peripheral’ grammar that many do not. We investigated the viability of such a categorical distinction in the Dutch language. We first consulted linguists’ intuitions as to the ‘core’ or ‘peripheral’ status of a wide range of grammatical structures. We then tested a selection of core- and peripheral-rated structures on naïve participants with varying levels of literacy experience, using grammaticality judgment as a proxy for receptive knowledge. Overall, participants demonstrated better knowledge of ‘core’ structures than ‘peripheral’ structures, but the considerable variability within these categories was strongly suggestive of a continuum rather than a categorical distinction between them. We also hypothesised that individual differences in the knowledge of core and peripheral structures would reflect participants’ literacy experience. This was supported only by a small trend in our data. The results fit best with the notion that more frequent syntactic structures are mastered by more people than infrequent ones and challenge the received sense of a categorical core-periphery distinction.
  • Favier, S., Meyer, A. S., & Huettig, F. (2021). Literacy can enhance syntactic prediction in spoken language processing. Journal of Experimental Psychology: General, 150(10), 2167-2174. doi:10.1037/xge0001042.

    Abstract

    Language comprehenders can use syntactic cues to generate predictions online about upcoming language. Previous research with reading-impaired adults and healthy, low-proficiency adult and child learners suggests that reading skills are related to prediction in spoken language comprehension. Here we investigated whether differences in literacy are also related to predictive spoken language processing in non-reading-impaired proficient adult readers with varying levels of literacy experience. Using the visual world paradigm enabled us to measure prediction based on syntactic cues in the spoken sentence, prior to the (predicted) target word. Literacy experience was found to be the strongest predictor of target anticipation, independent of general cognitive abilities. These findings suggest that a) experience with written language can enhance syntactic prediction of spoken language in normal adult language users, and b) processing skills can be transferred to related tasks (from reading to listening) if the domains involve similar processes (e.g., predictive dependencies) and representations (e.g., syntactic).

    Additional information

    Online supplementary material
  • Favier, S., & Huettig, F. (2021). Long-term written language experience affects grammaticality judgments and usage but not priming of spoken sentences. Quarterly Journal of Experimental Psychology, 74(8), 1378-1395. doi:10.1177/17470218211005228.

    Abstract

    ‘Book language’ offers a richer linguistic experience than typical conversational speech in terms of its syntactic properties. Here, we investigated the role of long-term syntactic experience on syntactic knowledge and processing. In a pre-registered study with 161 adult native Dutch speakers with varying levels of literacy, we assessed the contribution of individual differences in written language experience to offline and online syntactic processes. Offline syntactic knowledge was assessed as accuracy in an auditory grammaticality judgment task in which we tested violations of four Dutch grammatical norms. Online syntactic processing was indexed by syntactic priming of the Dutch dative alternation, using a comprehension-to-production priming paradigm with auditory presentation. Controlling for the contribution of non-verbal IQ, verbal working memory, and processing speed, we observed a robust effect of literacy experience on the detection of grammatical norm violations in spoken sentences, suggesting that exposure to the syntactic complexity and diversity of written language has specific benefits for general (modality-independent) syntactic knowledge. We replicated previous results by finding robust comprehension-to-production structural priming, both with and without lexical overlap between prime and target. Although literacy experience affected the usage of syntactic alternates in our large sample, it did not modulate their priming. We conclude that amount of experience with written language increases explicit awareness of grammatical norm violations and changes the usage of (PO vs. DO) dative spoken sentences but has no detectable effect on their implicit syntactic priming in proficient language users. These findings constrain theories about the effect of long-term experience on syntactic processing.
  • Felemban, D., Verdonschot, R. G., Iwamoto, Y., Uchiyama, Y., Kakimoto, N., Kreiborg, S., & Murakami, S. (2018). A quantitative experimental phantom study on MRI image uniformity. Dentomaxillofacial Radiology, 47(6): 20180077. doi:10.1259/dmfr.20180077.

    Abstract

    Objectives: Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA).
    Methods: Six metallic materials embedded in a glass phantom were scanned (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti and Co-Cr alloy) as well as a reference image. Sequences included spin echo (SE) and gradient echo (GRE) scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction or SCIC, phased array uniformity enhancement or PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the NEMA peak-deviation non-uniformity method.
    Results: Temporomandibular joint coils elicited the least uniform image, and brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo, especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic materials were present. Lastly, Co-Cr elicited the least uniform image, while the other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic materials).
    Conclusions: Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) is advantageous to optimize image quality, assists clinical interpretation, and may result in improved medical and dental care.
  • Felker, E. R., Broersma, M., & Ernestus, M. (2021). The role of corrective feedback and lexical guidance in perceptual learning of a novel L2 accent in dialogue. Applied Psycholinguistics, 42, 1029-1055. doi:10.1017/S0142716421000205.

    Abstract

    Perceptual learning of novel accents is a critical skill for second-language speech perception, but little is known about the mechanisms that facilitate perceptual learning in communicative contexts. To study perceptual learning in an interactive dialogue setting while maintaining experimental control of the phonetic input, we employed an innovative experimental method incorporating prerecorded speech into a naturalistic conversation. Using both computer-based and face-to-face dialogue settings, we investigated the effect of two types of learning mechanisms in interaction: explicit corrective feedback and implicit lexical guidance. Dutch participants played an information-gap game featuring minimal pairs with an accented English speaker whose /ε/ pronunciations were shifted to /ɪ/. Evidence for the vowel shift came either from corrective feedback about participants’ perceptual mistakes or from onscreen lexical information that constrained their interpretation of the interlocutor’s words. Corrective feedback explicitly contrasting the minimal pairs was more effective than generic feedback. Additionally, both receiving lexical guidance and exhibiting more uptake for the vowel shift improved listeners’ subsequent online processing of accented words. Comparable learning effects were found in both the computer-based and face-to-face interactions, showing that our results can be generalized to a more naturalistic learning context than traditional computer-based perception training programs.
  • Felker, E. R. (2021). Learning second language speech perception in natural settings. PhD Thesis, Radboud University, Nijmegen.
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Fernandes, T., Arunkumar, M., & Huettig, F. (2021). The role of the written script in shaping mirror-image discrimination: Evidence from illiterate, Tamil literate, and Tamil-Latin-alphabet bi-literate adults. Cognition, 206: 104493. doi:10.1016/j.cognition.2020.104493.

    Abstract

    Learning a script with mirrored graphs (e.g., d ≠ b) requires overcoming the evolutionary-old perceptual tendency to process mirror images as equivalent. Thus, breaking mirror invariance offers an important tool for understanding cultural re-shaping of evolutionarily ancient cognitive mechanisms. Here we investigated the role of script (i.e., presence vs. absence of mirrored graphs: Latin alphabet vs. Tamil) by revisiting mirror-image processing by illiterate, Tamil monoliterate, and Tamil-Latin-alphabet bi-literate adults. Participants performed two same-different tasks (one orientation-based, another shape-based) on Latin-alphabet letters. Tamil monoliterates were significantly better than illiterates and showed good explicit mirror-image discrimination. However, only bi-literate adults fully broke mirror invariance: slower shape-based judgments for mirrored than identical pairs and reduced disadvantage in orientation-based over shape-based judgments of mirrored pairs. These findings suggest that learning a script with mirrored graphs is the strongest force for breaking mirror invariance.

    Additional information

    supplementary material
  • Ferrari, A., & Noppeney, U. (2021). Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biology, 19(11): e3001465. doi:10.1371/journal.pbio.3001465.

    Abstract

    To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.

    Additional information

    supporting information
  • Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pasukonis, A., Hoeschele, M., Ocklenburg, S., De Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284: 20170990. doi:10.1098/rspb.2017.0990.

    Abstract

    Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes—Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
  • Filippi, P., Gogoleva, S. S., Volodina, E. V., Volodin, I. A., & De Boer, B. (2017). Humans identify negative (but not positive) arousal in silver fox vocalizations: Implications for the adaptive value of interspecific eavesdropping. Current Zoology, 63(4), 445-456. doi:10.1093/cz/zox035.

    Abstract

    The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans' ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox (Tame, Aggressive, and Unselected) in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans' ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans' absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.

  • Filippi, P., Ocklenburg, S., Bowling, D. L., Heege, L., Güntürkün, O., Newen, A., & de Boer, B. (2017). More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cognition & Emotion, 31(5), 879-891. doi:10.1080/02699931.2016.1177489.

    Abstract

    Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.
  • Filippi, P., Laaha, S., & Fitch, W. T. (2017). Utterance-final position and pitch marking aid word learning in school-age children. Royal Society Open Science, 4: 161035. doi:10.1098/rsos.161035.

    Abstract

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.
  • Fink, B., Bläsing, B., Ravignani, A., & Shackelford, T. K. (2021). Evolution and functions of human dance. Evolution and Human Behavior, 42(4), 351-360. doi:10.1016/j.evolhumbehav.2021.01.003.

    Abstract

    Dance is ubiquitous among humans and has received attention from several disciplines. Ethnographic documentation suggests that dance has a signaling function in social interaction. It can influence mate preferences and facilitate social bonds. Research has provided insights into the proximate mechanisms of dance, individually or when dancing with partners or in groups. Here, we review dance research from an evolutionary perspective. We propose that human dance evolved from ordinary (non-communicative) movements to communicate socially relevant information accurately. The need for accurate social signaling may have accompanied increases in group size and population density. Because of its complexity in production and display, dance may have evolved as a vehicle for expressing social and cultural information. Mating-related qualities and motives may have been the predominant information derived from individual dance movements, whereas group dance offers the opportunity for the exchange of socially relevant content, for coordinating actions among group members, for signaling coalitional strength, and for stabilizing group structures. We conclude that, despite the cultural diversity in dance movements and contexts, the primary communicative functions of dance may be the same across societies.
  • Fisher, N., Hadley, L., Corps, R. E., & Pickering, M. (2021). The effects of dual-task interference in predicting turn-ends in speech and music. Brain Research, 1768: 147571. doi:10.1016/j.brainres.2021.147571.

    Abstract

    Determining when a partner’s spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner’s action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, S. E. (2017). Evolution of language: Lessons from the genome. Psychonomic Bulletin & Review, 24(1), 34-40. doi:10.3758/s13423-016-1112-8.

    Abstract

    The post-genomic era is an exciting time for researchers interested in the biology of speech and language. Substantive advances in molecular methodologies have opened up entire vistas of investigation that were not previously possible, or in some cases even imagined. Speculations concerning the origins of human cognitive traits are being transformed into empirically addressable questions, generating specific hypotheses that can be explicitly tested using data collected from both the natural world and experimental settings. In this article, I discuss a number of promising lines of research in this area. For example, the field has begun to identify genes implicated in speech and language skills, including not just disorders but also the normal range of abilities. Such genes provide powerful entry points for gaining insights into neural bases and evolutionary origins, using sophisticated experimental tools from molecular neuroscience and developmental neurobiology. At the same time, sequencing of ancient hominin genomes is giving us an unprecedented view of the molecular genetic changes that have occurred during the evolution of our species. Synthesis of data from these complementary sources offers an opportunity to robustly evaluate alternative accounts of language evolution. Of course, this endeavour remains challenging on many fronts, as I also highlight in the article. Nonetheless, such an integrated approach holds great potential for untangling the complexities of the capacities that make us human.
  • Fisher, V. J. (2021). Embodied songs: Insights into the nature of cross-modal meaning-making within sign language informed, embodied interpretations of vocal music. Frontiers in Psychology, 12: 624689. doi:10.3389/fpsyg.2021.624689.

    Abstract

    Embodied song practices involve the transformation of songs from the acoustic modality into an embodied-visual form, to increase meaningful access for d/Deaf audiences. This goes beyond the translation of lyrics, by combining poetic sign language with other bodily movements to embody the para-linguistic expressive and musical features that enhance the message of a song. To date, the limited research into this phenomenon has focussed on linguistic features and interactions with rhythm. The relationship between bodily actions and music has not been probed beyond an assumed implication of conformance. However, as the primary objective is to communicate equivalent meanings, the ways that the acoustic and embodied-visual signals relate to each other should reveal something about underlying conceptual agreement. This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences. These theories are applied to embodiment strategies used by prominent d/Deaf and hearing Dutch practitioners, to unpack the relationship between acoustic songs, their embodied representations, and their broader conceptual and affective meanings. This leads to the proposition that meaning primarily arises through shared patterns of internal relations across a range of amodal and cross-modal features with an emphasis on dynamic qualities. These analogous patterns can inform metaphorical interpretations and trigger shared emotional responses. This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research.
  • Fisher, V. J. (2017). Unfurling the wings of flight: Clarifying ‘the what’ and ‘the why’ of mental imagery use in dance. Research in Dance Education, 18(3), 252-272. doi:10.1080/14647893.2017.1369508.

    Abstract

    This article provides clarification regarding ‘the what’ and ‘the why’ of mental imagery use in dance. It proposes that mental images are invoked across sensory modalities and often combine internal and external perspectives. The content of images ranges from ‘direct’ body oriented simulations along a continuum employing analogous mapping through ‘semi-direct’ literal similarities to abstract metaphors. The reasons for employing imagery are diverse and often overlapping, affecting physical, affective (psychological) and cognitive domains. This paper argues that when dance uses imagery, it is mapping aspects of the world to the body via analogy. Such mapping informs and changes our understanding of both our bodies and the world. In this way, mental imagery use in dance is fundamentally a process of embodied cognition.
  • Fitz, H., & Chang, F. (2017). Meaningful questions: The acquisition of auxiliary inversion in a connectionist model of sentence production. Cognition, 166, 225-250. doi:10.1016/j.cognition.2017.05.008.

    Abstract

    Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
  • FitzPatrick, I., & Weber, K. (2008). “Il piccolo principe est allé”: Processing of language switches in auditory sentence comprehension. Journal of Neuroscience, 28(18), 4581-4582. doi:10.1523/JNEUROSCI.0905-08.2008.
  • Floyd, S., San Roque, L., & Majid, A. (2018). Smell is coded in grammar and frequent in discourse: Cha'palaa olfactory language in cross-linguistic perspective. Journal of Linguistic Anthropology, 28(2), 175-196. doi:10.1111/jola.12190.

    Abstract

    It has long been claimed that there is no lexical field of smell, and that smell is of too little validity to be expressed in grammar. We demonstrate both claims are false. The Cha'palaa language (Ecuador) has at least 15 abstract smell terms, each of which is formed using a type of classifier previously thought not to exist. Moreover, using conversational corpora we show that Cha'palaa speakers also talk about smell more than Imbabura Quechua and English speakers. Together, this shows how language and social interaction may jointly reflect distinct cultural orientations towards sensory experience in general and olfaction in particular.
  • Floyd, S. (2008). The Pirate media economy and the emergence of Quichua language media spaces in Ecuador. Anthropology of Work Review, 29(2), 34-41. doi:10.1111/j.1548-1417.2008.00012.x.

    Abstract

    This paper gives an account of the pirate media economy of Ecuador and its role in the emergence of indigenous Quichua-language media spaces, identifying the different parties involved in this economy, discussing their relationship to the parallel ‘‘legitimate’’ media economy, and considering the implications of this informal media market for Quichua linguistic and cultural reproduction. As digital recording and playback technology has become increasingly more affordable and widespread over recent years, black markets have grown up worldwide, based on cheap ‘‘illegal’’ reproduction of commercial media, today sold by informal entrepreneurs in rural markets, shops and street corners around Ecuador. Piggybacking on this pirate infrastructure, Quichua-speaking media producers and consumers have begun to circulate indigenous-language video at an unprecedented rate, helped by small-scale merchants who themselves profit by supplying market demands for positive images of indigenous people. In a context of a national media that has tended to silence indigenous voices rather than amplify them, informal media producers, consumers and vendors are developing relationships that open meaningful media spaces within the particular social, economic and linguistic contexts of Ecuador.
  • Floyd, S., Rossi, G., Baranova, J., Blythe, J., Dingemanse, M., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2018). Universals and cultural diversity in the expression of gratitude. Royal Society Open Science, 5: 180391. doi:10.1098/rsos.180391.

    Abstract

    Gratitude is argued to have evolved to motivate and maintain social reciprocity among people, and to be linked to a wide range of positive effects — social, psychological, and even physical. But is socially reciprocal behaviour dependent on the expression of gratitude, for example by saying "thank you" as in English? Current research has not included cross-cultural elements, and has tended to conflate gratitude as an emotion with gratitude as a linguistic practice, as might appear to be the case in English. Here we ask to what extent people actually express gratitude in different societies by focussing on episodes of everyday life where someone obtains a good, service, or support from another, and comparing these episodes across eight languages from five continents. What we find is that expressions of gratitude in these episodes are remarkably rare, suggesting that social reciprocity in everyday life relies on tacit understandings of people’s rights and duties surrounding mutual assistance and collaboration. At the same time, we also find minor cross-cultural variation, with slightly higher rates in the Western European languages English and Italian, showing that universal tendencies of social reciprocity should not be conflated with more culturally variable practices of expressing gratitude. Our study complements previous experimental and culture-specific research on social reciprocity with a systematic comparison of audiovisual corpora of naturally occurring social interaction from different cultures around the world.
  • Folia, V., Uddén, J., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2008). Implicit learning and dyslexia. Annals of the New York Academy of Sciences, 1145, 132-150. doi:10.1196/annals.1416.012.

    Abstract

    Several studies have reported an association between dyslexia and implicit learning deficits. It has been suggested that the weakness in implicit learning observed in dyslexic individuals may be related to sequential processing and implicit sequence learning. In the present article, we review the current literature on implicit learning and dyslexia. We describe a novel, forced-choice structural "mere exposure" artificial grammar learning paradigm and characterize this paradigm in normal readers in relation to the standard grammaticality classification paradigm. We argue that preference classification is a more optimal measure of the outcome of implicit acquisition since in the preference version participants are kept completely unaware of the underlying generative mechanism, while in the grammaticality version, the subjects have, at least in principle, been informed about the existence of an underlying complex set of rules at the point of classification (but not during acquisition). On the basis of the "mere exposure effect," we tested the prediction that the development of preference will correlate with the grammaticality status of the classification items. In addition, we examined the effects of grammaticality (grammatical/nongrammatical) and associative chunk strength (ACS; high/low) on the classification tasks (preference/grammaticality). Using a balanced ACS design in which the factors of grammaticality (grammatical/nongrammatical) and ACS (high/low) were independently controlled in a 2 × 2 factorial design, we confirmed our predictions. We discuss the suitability of this task for further investigation of the implicit learning characteristics in dyslexia.
  • Forkel, S. J., & Catani, M. (2018). Lesion mapping in acute stroke aphasia and its implications for recovery. Neuropsychologia, 115, 88-100. doi:10.1016/j.neuropsychologia.2018.03.036.

    Abstract

    Patients with stroke offer a unique window into understanding human brain function. Mapping stroke lesions poses several challenges due to the complexity of the lesion anatomy and the mechanisms causing local and remote disruption on brain networks. In this prospective longitudinal study, we compare standard and advanced approaches to white matter lesion mapping applied to acute stroke patients with aphasia. Eighteen patients with acute left hemisphere stroke were recruited and scanned within two weeks from symptom onset. Aphasia assessment was performed at baseline and six-month follow-up. Structural and diffusion MRI contrasts indicated an area of maximum overlap in the anterior external/extreme capsule with diffusion images showing a larger overlap extending into posterior perisylvian regions. Anatomical predictors of recovery included damage to ipsilesional tracts (as shown by both structural and diffusion images) and contralesional tracts (as shown by diffusion images only). These findings indicate converging results from structural and diffusion lesion mapping methods but also clear differences between the two approaches in their ability to identify predictors of recovery outside the lesioned regions.
  • Forkstam, C., Elwér, A., Ingvar, M., & Petersson, K. M. (2008). Instruction effects in implicit artificial grammar learning: A preference for grammaticality. Brain Research, 1221, 80-92. doi:10.1016/j.brainres.2008.05.005.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a paradigm that has been proposed as a simple model for aspects of natural language acquisition. In the present study we compared the typical yes–no grammaticality classification, with yes–no preference classification. In the case of preference instruction no reference to the underlying generative mechanism (i.e., grammar) is needed and the subjects are therefore completely uninformed about an underlying structure in the acquisition material. In experiment 1, subjects engaged in a short-term memory task using only grammatical strings without performance feedback for 5 days. As a result of the 5 acquisition days, classification performance was independent of instruction type and both the preference and the grammaticality group acquired relevant knowledge of the underlying generative mechanism to a similar degree. Changing the grammatical strings to random strings in the acquisition material (experiment 2) resulted in classification being driven by local substring familiarity. Contrasting repeated vs. non-repeated preference classification (experiment 3) showed that the effect of local substring familiarity decreases with repeated classification. This was not the case for repeated grammaticality classifications. We conclude that classification performance is largely independent of instruction type and that forced-choice preference classification is equivalent to the typical grammaticality classification.
  • Frances, C., Navarra-Barindelli, E., & Martin, C. D. (2021). Inhibitory and facilitatory effects of phonological and orthographic similarity on L2 word recognition across modalities in bilinguals. Scientific Reports, 11: 12812. doi:10.1038/s41598-021-92259-z.

    Abstract

    Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality as well as the interplay between type of similarity and modality remain largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. Results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into our current processing models.
  • Frances, C., Costa, A., & Baus, C. (2018). On the effects of regional accents on memory and credibility. Acta Psychologica, 186, 63-70. doi:10.1016/j.actpsy.2018.04.003.

    Abstract

    The information we obtain from how speakers sound—for example their accent—affects how we interpret the messages they convey. A clear example is foreign accented speech, where reduced intelligibility and speaker's social categorization (out-group member) affect memory and the credibility of the message (e.g., less trustworthiness). In the present study, we go one step further and ask whether evaluations of messages are also affected by regional accents—accents from a different region than the listener. In the current study, we report results from three experiments on immediate memory recognition and immediate credibility assessments as well as the illusory truth effect. These revealed no differences between messages conveyed in local—from the same region as the participant—and regional accents—from native speakers of a different country than the participants. Our results suggest that when the accent of a speaker has high intelligibility, social categorization by accent does not seem to negatively affect how we treat the speakers' messages.
  • Frances, C. (2021). Semantic richness, semantic context, and language learning. PhD Thesis, Universidad del País Vasco-Euskal Herriko Unibertsitatea, Donostia.

    Abstract

    As knowing a foreign language becomes a necessity in the modern world, a large portion of the population is faced with the challenge of learning a language in a classroom. This, in turn, presents a unique set of difficulties. Acquiring a language with limited and artificial exposure makes learning new information and vocabulary particularly difficult. The purpose of this thesis is to help us understand how we can compensate—at least partially—for these difficulties by presenting information in a way that aids learning. In particular, I focused on variables that affect semantic richness—meaning the amount and variability of information associated with a word. Some factors that affect semantic richness are intrinsic to the word and others pertain to that word’s relationship with other items and information. This latter group depends on the context around the to-be-learned items rather than the words themselves. These variables are easier to manipulate than intrinsic qualities, making them more accessible tools for teaching and understanding learning. I focused on two factors: emotionality of the surrounding semantic context and contextual diversity.
    Publication 1 (Frances, de Bruin, et al., 2020b) focused on content learning in a foreign language and whether the emotionality—positive or neutral—of the semantic context surrounding key information aided its learning. This built on prior research that showed a reduction in emotionality in a foreign language. Participants were taught information embedded in either positive or neutral semantic contexts in either their native or foreign language. When they were then tested on these embedded facts, participants’ performance decreased in the foreign language. But, more importantly, they remembered the information from the positive semantic contexts better than that from the neutral ones.
    In Publication 2 (Frances, de Bruin, et al., 2020a), I focused on how emotionality affected vocabulary learning. I taught participants the names of novel items described either in positive or neutral terms in either their native or foreign language. Participants were then asked to recall and recognize the object’s name when cued with its image. The effects of language varied with the difficulty of the task—appearing in recall but not recognition tasks. Most importantly, learning the words in a positive context improved learning, particularly of the association between the image of the object and its name.
    In Publication 3 (Frances, Martin, et al., 2020), I explored the effects of contextual diversity—namely, the number of texts a word appears in—on native and foreign language word learning. Participants read several texts that contained novel pseudowords. The total number of encounters with the novel words was held constant, but they appeared in 1, 2, 4, or 8 texts in either the participants’ native or foreign language. Increasing contextual diversity—i.e., the number of texts a word appeared in—improved recall and recognition, as well as the ability to match the word with its meaning. Using a foreign language only affected performance when participants had to quickly identify the meaning of the word.
    Overall, I found that the tested contextual factors related to semantic richness—i.e., emotionality of the semantic context and contextual diversity—can be manipulated to improve learning in a foreign language. Using positive emotionality not only improved learning in the foreign language, but it did so to the same extent as in the native language. On a theoretical level, this suggests that the reduction in emotionality in a foreign language is not ubiquitous and might relate to the way in which that language was learned.
    The third article shows an experimental manipulation of contextual diversity and how this can affect learning of a lexical item, even if the amount of information known about the item is kept constant. As in the case of emotionality, the effects of contextual diversity were also the same between languages. Although deducing words from context is dependent on vocabulary size, this does not seem to hinder the benefits of contextual diversity in the foreign language.
    Finally, as a whole, the articles contained in this compendium provide evidence that some aspects of semantic richness can be manipulated contextually to improve learning and memory. In addition, the effects of these factors seem to be independent of language status—meaning, native or foreign—when learning new content. This suggests that learning in a foreign and a native language is not as different as I initially hypothesized, allowing us to take advantage of native language learning tools in the foreign language, as well.
  • Francisco, A. A., Groen, M. A., Jesse, A., & McQueen, J. M. (2017). Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learning and Individual Differences, 54, 60-72. doi:10.1016/j.lindif.2017.01.003.

    Abstract

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a diagnosis of dyslexia, we showed that—across all participants, and after accounting for variance in cognitive abilities—audiovisual temporal sensitivity contributed uniquely to variance in reading errors. This is consistent with previous studies demonstrating an audiovisual deficit in dyslexia. Additionally, we showed that speechreading (identification of speech based on visual cues from the talking face alone) was a unique contributor to variance in phonological awareness in dyslexic readers only: those who scored higher on speechreading, scored lower on phonological awareness. This suggests a greater reliance on visual speech as a compensatory mechanism when processing auditory speech is problematic. A secondary aim of this study was to better understand the nature of dyslexia. The finding that a sub-group of dyslexic readers scored low on phonological awareness and high on speechreading is consistent with a hybrid perspective of dyslexia: There are multiple possible pathways to reading impairment, which may translate into multiple profiles of dyslexia.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2017). A general audiovisual temporal processing deficit in adult readers with dyslexia. Journal of Speech, Language, and Hearing Research, 60, 144-158. doi:10.1044/2016_JSLHR-H-15-0375.

    Abstract

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. Conclusions: The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
  • Francks, C., Paracchini, S., Smith, S. D., Richardson, A. J., Scerri, T. S., Cardon, L. R., Marlow, A. J., MacPhie, I. L., Walter, J., Pennington, B. F., Fisher, S. E., Olson, R. K., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2004). A 77-kilobase region of chromosome 6p22.2 is associated with dyslexia in families from the United Kingdom and from the United States. American Journal of Human Genetics, 75(6), 1046-1058. doi:10.1086/426404.

    Abstract

    Several quantitative trait loci (QTLs) that influence developmental dyslexia (reading disability [RD]) have been mapped to chromosome regions by linkage analysis. The most consistently replicated area of linkage is on chromosome 6p23-21.3. We used association analysis in 223 siblings from the United Kingdom to identify an underlying QTL on 6p22.2. Our association study implicates a 77-kb region spanning the gene TTRAP and the first four exons of the neighboring uncharacterized gene KIAA0319. The region of association is also directly upstream of a third gene, THEM2. We found evidence of these associations in a second sample of siblings from the United Kingdom, as well as in an independent sample of twin-based sibships from Colorado. One main RD risk haplotype that has a frequency of ∼12% was found in both the U.K. and U.S. samples. The haplotype is not distinguished by any protein-coding polymorphisms, and, therefore, the functional variation may relate to gene expression. The QTL influences a broad range of reading-related cognitive abilities but has no significant impact on general cognitive performance in these samples. In addition, the QTL effect may be largely limited to the severe range of reading disability.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2008). World knowledge in computational models of discourse comprehension. Discourse Processes, 45(6), 429-463. doi:10.1080/01638530802069926.

    Abstract

    Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, ad hoc selection, extraction from text corpora, and implementation of all knowledge about a simplified microworld. Each of these strategies is illustrated by a detailed discussion of a model of discourse comprehension. It is argued that seemingly successful modeling results are uninformative if knowledge is implemented ad hoc or not at all, that knowledge extracted from large text corpora is not appropriate for discourse comprehension, and that a suitable implementation can be obtained by applying the microworld strategy.
  • Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A collaborative approach to infant research: Promoting reproducibility, best practices, and theory-building. Infancy, 22(4), 421-435. doi:10.1111/infa.12182.

    Abstract

    The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research—especially with infant participants—also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods. This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.
  • Frank, S. L. (2004). Computational modeling of discourse comprehension. PhD Thesis, Tilburg University, Tilburg.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
  • Frank, S. L., & Willems, R. M. (2017). Word predictability and semantic similarity show distinct patterns of brain activity during language comprehension. Language, Cognition and Neuroscience, 32(9), 1192-1203. doi:10.1080/23273798.2017.1323109.

    Abstract

    We investigate the effects of two types of relationship between the words of a sentence or text – predictability and semantic similarity – by reanalysing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data from studies in which participants comprehend naturalistic stimuli. Each content word's predictability given previous words is quantified by a probabilistic language model, and semantic similarity to previous words is quantified by a distributional semantics model. Brain activity time-locked to each word is regressed on the two model-derived measures. Results show that predictability and semantic similarity have near identical N400 effects but are dissociated in the fMRI data, with word predictability related to activity in, among others, the visual word-form area, and semantic similarity related to activity in areas associated with the semantic network. This indicates that both predictability and similarity play a role during natural language comprehension and modulate distinct cortical regions.
  • Franke, B., Hoogman, M., Vasquez, A. A., Heister, J., Savelkoul, P., Naber, M., Scheffer, H., Kiemeney, L., Kan, C., Kooij, J., & Buitelaar, J. (2008). Association of the dopamine transporter (SLC6A3/DAT1) gene 9-6 haplotype with adult ADHD. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147, 1576-1579. doi:10.1002/ajmg.b.30861.

    Abstract

    ADHD is a neuropsychiatric disorder characterized by chronic hyperactivity, inattention and impulsivity, which affects about 5% of school-age children. ADHD persists into adulthood in at least 15% of cases. It is highly heritable and familial influences seem strongest for ADHD persisting into adulthood. However, most of the genetic research in ADHD has been carried out in children with the disorder. The gene that has received most attention in ADHD genetics is SLC6A3/DAT1 encoding the dopamine transporter. In the current study we attempted to replicate in adults with ADHD the reported association of a 10–6 SLC6A3-haplotype, formed by the 10-repeat allele of the variable number of tandem repeat (VNTR) polymorphism in the 3′ untranslated region of the gene and the 6-repeat allele of the VNTR in intron 8 of the gene, with childhood ADHD. In addition, we wished to explore the role of a recently described VNTR in intron 3 of the gene. Two hundred sixteen patients and 528 controls were included in the study. We found a 9–6 SLC6A3-haplotype, rather than the 10–6 haplotype, to be associated with ADHD in adults. The intron 3 VNTR showed no association with adult ADHD. Our findings converge with earlier reports and suggest that age is an important factor to be taken into account when assessing the association of SLC6A3 with ADHD. If confirmed in other studies, the differential association of the gene with ADHD in children and in adults might imply that SLC6A3 plays a role in modulating the ADHD phenotype, rather than causing it.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Franken, M. K. (2018). Listening for speaking: Investigations of the relationship between speech perception and production. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    Speaking and listening are complex tasks that we perform on a daily basis, almost without conscious effort. Interestingly, speaking almost never occurs without listening: whenever we speak, we at least hear our own speech. The research in this thesis is concerned with how the perception of our own speech influences our speaking behavior. We show that unconsciously, we actively monitor this auditory feedback of our own speech. This way, we can efficiently take action and adapt articulation when an error occurs and auditory feedback does not correspond to our expectation. Processing the auditory feedback of our speech does not, however, automatically affect speech production. It is subject to a number of constraints. For example, we do not just track auditory feedback, but also its consistency. If auditory feedback is more consistent over time, it has a stronger influence on speech production. In addition, we investigated how auditory feedback during speech is processed in the brain, using magnetoencephalography (MEG). The results suggest the involvement of a broad cortical network including both auditory and motor-related regions. This is consistent with the view that the auditory center of the brain is involved in comparing auditory feedback to our expectation of auditory feedback. If this comparison yields a mismatch, motor-related regions of the brain can be recruited to alter the ongoing articulations.

    Additional information

    full text via Radboud Repository
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e54900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSC cells, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • Friederici, A. D., & Levelt, W. J. M. (1986). Cognitive processes of spatial coordinate assignment: On weighting perceptual cues. Naturwissenschaften, 73, 455-458.
  • Friedrich, P., Forkel, S. J., Amiez, C., Balsters, J. H., Coulon, O., Fan, L., Goulas, A., Hadj-Bouziane, F., Hecht, E. E., Heuer, K., Jiang, T., Latzman, R. D., Liu, X., Loh, K. K., Patil, K. R., Lopez-Persem, A., Procyk, E., Sallet, J., Toro, R., Vickery, S., Weis, S., Wilson, C., Xu, T., Zerbi, V., Eickhoff, S. B., Margulies, D., Mars, R., & Thiebaut de Schotten, M. (2021). Imaging evolution of the primate brain: The next frontier? NeuroImage, 228: 117685. doi:10.1016/j.neuroimage.2020.117685.

    Abstract

    Evolution, as we currently understand it, strikes a delicate balance between animals' ancestral history and adaptations to their current niche. Similarities between species are generally considered inherited from a common ancestor whereas observed differences are considered as more recent evolution. Hence comparing species can provide insights into the evolutionary history. Comparative neuroimaging has recently emerged as a novel subdiscipline, which uses magnetic resonance imaging (MRI) to identify similarities and differences in brain structure and function across species. Whereas invasive histological and molecular techniques are superior in spatial resolution, they are laborious, post-mortem, and oftentimes limited to specific species. Neuroimaging, by comparison, has the advantages of being applicable across species and allows for fast, whole-brain, repeatable, and multi-modal measurements of the structure and function in living brains and post-mortem tissue. In this review, we summarise the current state of the art in comparative anatomy and function of the brain and gather together the main scientific questions to be explored in the future of the fascinating new field of brain evolution derived from comparative neuroimaging.
  • Frost, R. L. A., Monaghan, P., & Tatsumi, T. (2017). Domain-general mechanisms for speech segmentation: The role of duration information in language learning. Journal of Experimental Psychology: Human Perception and Performance, 43(3), 466-476. doi:10.1037/xhp0000325.

    Abstract

    Speech segmentation is supported by multiple sources of information that may either inform language processing specifically, or serve learning more broadly. The Iambic/Trochaic Law (ITL), where increased duration indicates the end of a group and increased emphasis indicates the beginning of a group, has been proposed as a domain-general mechanism that also applies to language. However, language background has been suggested to modulate use of the ITL, meaning that these perceptual grouping preferences may instead be a consequence of language exposure. To distinguish between these accounts, we exposed native-English and native-Japanese listeners to sequences of speech (Experiment 1) and nonspeech stimuli (Experiment 2), and examined segmentation using a 2AFC task. Duration was manipulated over 3 conditions: sequences contained either an initial-item duration increase, or a final-item duration increase, or items of uniform duration. In Experiment 1, language background did not affect the use of duration as a cue for segmenting speech in a structured artificial language. In Experiment 2, the same results were found for grouping structured sequences of visual shapes. The results are consistent with proposals that duration information draws upon a domain-general mechanism that can apply to the special case of language acquisition.
  • Frost, R. L. A., & Monaghan, P. (2017). Sleep-driven computations in speech processing. PLoS One, 12(1): e0169538. doi:10.1371/journal.pone.0169538.

    Abstract

    Acquiring language requires segmenting speech into individual words, and abstracting over those words to discover grammatical structure. However, these tasks can be conflicting—on the one hand requiring memorisation of precise sequences that occur in speech, and on the other requiring a flexible reconstruction of these sequences to determine the grammar. Here, we examine whether speech segmentation and generalisation of grammar can occur simultaneously—with the conflicting requirements for these tasks being overcome by sleep-related consolidation. After exposure to an artificial language comprising words containing non-adjacent dependencies, participants underwent periods of consolidation involving either sleep or wake. Participants who slept before testing demonstrated a sustained boost to word learning and a short-term improvement to grammatical generalisation of the non-adjacencies, with improvements after sleep outweighing gains seen after an equal period of wake. Thus, we propose that sleep may facilitate processing for these conflicting tasks in language acquisition, but with enhanced benefits for speech segmentation.

    Additional information

    Data available
  • Gaby, A. R. (2004). Extended functions of Thaayorre body part terms. Papers in Linguistics and Applied Linguistics, 4(2), 24-34.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Brain error-monitoring activity is affected by semantic relatedness: An event-related brain potentials study. Journal of Cognitive Neuroscience, 20(5), 927-940. doi:10.1162/jocn.2008.20514.

    Abstract

    Speakers continuously monitor what they say. Sometimes, self-monitoring malfunctions and errors pass undetected and uncorrected. In the field of action monitoring, an event-related brain potential, the error-related negativity (ERN), is associated with error processing. The present study relates the ERN to verbal self-monitoring and investigates how the ERN is affected by auditory distractors during verbal monitoring. We found that the ERN was largest following errors that occurred after semantically related distractors had been presented, as compared to semantically unrelated ones. This result demonstrates that the ERN is sensitive not only to response conflict resulting from the incompatibility of motor responses but also to more abstract lexical retrieval conflict resulting from activation of multiple lexical entries. This, in turn, suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Motivation and semantic context affect brain error-monitoring activity: An event-related brain potentials study. NeuroImage, 39, 395-405. doi:10.1016/j.neuroimage.2007.09.001.

    Abstract

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants’ performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing are discussed.
  • Garcia, R., Dery, J. E., Roeser, J., & Höhle, B. (2018). Word order preferences of Tagalog-speaking adults and children. First Language, 38(6), 617-640. doi:10.1177/0142723718790317.

    Abstract

    This article investigates the word order preferences of Tagalog-speaking adults and five- and seven-year-old children. The participants were asked to complete sentences to describe pictures depicting actions between two animate entities. Adults preferred agent-initial constructions in the patient voice but not in the agent voice, while the children produced mainly agent-initial constructions regardless of voice. This agent-initial preference, despite the lack of a close link between the agent and the subject in Tagalog, shows that this word order preference is not merely syntactically-driven (subject-initial preference). Additionally, the children’s agent-initial preference in the agent voice, contrary to the adults’ lack of preference, shows that children do not respect the subject-last principle of ordering Tagalog full noun phrases. These results suggest that language-specific optional features like a subject-last principle take longer to be acquired.
  • Garcia, R., Garrido Rodriguez, G., & Kidd, E. (2021). Developmental effects in the online use of morphosyntactic cues in sentence processing: Evidence from Tagalog. Cognition, 216: 104859. doi:10.1016/j.cognition.2021.104859.

    Abstract

    Children must necessarily process their input in order to learn it, yet the architecture of the developing parsing system and how it interfaces with acquisition is unclear. In the current paper we report experimental and corpus data investigating adult and children's use of morphosyntactic cues for making incremental online predictions of thematic roles in Tagalog, a verb-initial symmetrical voice language of the Philippines. In Study 1, Tagalog-speaking adults completed a visual world eye-tracking experiment in which they viewed pictures of causative actions that were described by transitive sentences manipulated for voice and word order. The pattern of results showed that adults process agent and patient voice differently, predicting the upcoming noun in the patient voice but not in the agent voice, consistent with the observation of a patient voice preference in adult sentence production. In Study 2, our analysis of a corpus of child-directed speech showed that children heard more patient voice- than agent voice-marked verbs. In Study 3, 5-, 7-, and 9-year-old children completed a similar eye-tracking task as used in Study 1. The overall pattern of results suggested that, like the adults in Study 1, children process agent and patient voice differently in a manner that reflects the input distributions, with children developing towards the adult state across early childhood. The results are most consistent with theoretical accounts that identify a key role for input distributions in acquisition and language processing.

    Additional information

    1-s2.0-S001002772100278X-mmc1.docx
  • Gaspard III, J. C., Bauer, G. B., Mann, D. A., Boerner, K., Denum, L., Frances, C., & Reep, R. L. (2017). Detection of hydrodynamic stimuli by the postcranial body of Florida manatees (Trichechus manatus latirostris). Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, 203, 111-120. doi:10.1007/s00359-016-1142-8.

    Abstract

    Manatees live in shallow, frequently turbid waters. The sensory means by which they navigate in these conditions are unknown. Poor visual acuity, lack of echolocation, and modest chemosensation suggest that other modalities play an important role. Rich innervation of sensory hairs that cover the entire body and enlarged somatosensory areas of the brain suggest that tactile senses are good candidates. Previous tests of detection of underwater vibratory stimuli indicated that they use passive movement of the hairs to detect particle displacements in the vicinity of a micron or less for frequencies from 10 to 150 Hz. In the current study, hydrodynamic stimuli were created by a sinusoidally oscillating sphere that generated a dipole field at frequencies from 5 to 150 Hz. Go/no-go tests of manatee postcranial mechanoreception of hydrodynamic stimuli indicated excellent sensitivity, but about an order of magnitude less than the facial region. When the vibrissae were trimmed, detection thresholds were elevated, suggesting that the vibrissae were an important means by which detection occurred. Manatees were also highly accurate in two-choice directional discrimination: greater than 90% correct at all frequencies tested. We hypothesize that manatees utilize vibrissae as a three-dimensional array to detect and localize low-frequency hydrodynamic stimuli.
  • Gau, R., Noble, S., Heuer, K., Bottenhorn, K. L., Bilgin, I. P., Yang, Y.-F., Huntenburg, J. M., Bayer, J. M., Bethlehem, R. A., Rhoads, S. A., Vogelbacher, C., Borghesani, V., Levitis, E., Wang, H.-T., Van Den Bossche, S., Kobeleva, X., Legarreta, J. H., Guay, S., Atay, S. M., Varoquaux, G. P., Huijser, D. C., Sandström, M. S., Herholz, P., Nastase, S. A., Badhwar, A., Dumas, G., Schwab, S., Moia, S., Dayan, M., Bassil, Y., Brooks, P. P., Mancini, M., Shine, J. M., O’Connor, D., Xie, X., Poggiali, D., Friedrich, P., Heinsfeld, A. S., Riedl, L., Toro, R., Caballero-Gaudes, C., Eklund, A., Garner, K. G., Nolan, C. R., Demeter, D. V., Barrios, F. A., Merchant, J. S., McDevitt, E. A., Oostenveld, R., Craddock, R. C., Rokem, A., Doyle, A., Ghosh, S. S., Nikolaidis, A., Stanley, O. W., Uruñuela, E., Anousheh, N., Arnatkeviciute, A., Auzias, G., Bachar, D., Bannier, E., Basanisi, R., Basavaraj, A., Bedini, M., Bellec, P., Benn, R. A., Berluti, K., Bollmann, S., Bollmann, S., Bradley, C., Brown, J., Buchweitz, A., Callahan, P., Chan, M. Y., Chandio, B. Q., Cheng, T., Chopra, S., Chung, A. W., Close, T. G., Combrisson, E., Cona, G., Constable, R. T., Cury, C., Dadi, K., Damasceno, P. F., Das, S., De Vico Fallani, F., DeStasio, K., Dickie, E. W., Dorfschmidt, L., Duff, E. P., DuPre, E., Dziura, S., Esper, N. B., Esteban, O., Fadnavis, S., Flandin, G., Flannery, J. E., Flournoy, J., Forkel, S. J., Franco, A. R., Ganesan, S., Gao, S., García Alanis, J. C., Garyfallidis, E., Glatard, T., Glerean, E., Gonzalez-Castillo, J., Gould van Praag, C. D., Greene, A. S., Gupta, G., Hahn, C. A., Halchenko, Y. O., Handwerker, D., Hartmann, T. S., Hayot-Sasson, V., Heunis, S., Hoffstaedter, F., Hohmann, D. M., Horien, C., Ioanas, H.-I., Iordan, A., Jiang, C., Joseph, M., Kai, J., Karakuzu, A., Kennedy, D. N., Keshavan, A., Khan, A. R., Kiar, G., Klink, P. C., Koppelmans, V., Koudoro, S., Laird, A. R., Langs, G., Laws, M., Licandro, R., Liew, S.-L., Lipic, T., Litinas, K., Lurie, D. J., Lussier, D., Madan, C. R., Mais, L.-T., Mansour L, S., Manzano-Patron, J., Maoutsa, D., Marcon, M., Margulies, D. S., Marinato, G., Marinazzo, D., Markiewicz, C. J., Maumet, C., Meneguzzi, F., Meunier, D., Milham, M. P., Mills, K. L., Momi, D., Moreau, C. A., Motala, A., Moxon-Emre, I., Nichols, T. E., Nielson, D. M., Nilsonne, G., Novello, L., O’Brien, C., Olafson, E., Oliver, L. D., Onofrey, J. A., Orchard, E. R., Oudyk, K., Park, P. J., Parsapoor, M., Pasquini, L., Peltier, S., Pernet, C. R., Pienaar, R., Pinheiro-Chagas, P., Poline, J.-B., Qiu, A., Quendera, T., Rice, L. C., Rocha-Hidalgo, J., Rutherford, S., Scharinger, M., Scheinost, D., Shariq, D., Shaw, T. B., Siless, V., Simmonite, M., Sirmpilatze, N., Spence, H., Sprenger, J., Stajduhar, A., Szinte, M., Takerkart, S., Tam, A., Tejavibulya, L., Thiebaut de Schotten, M., Thome, I., Tomaz da Silva, L., Traut, N., Uddin, L. Q., Vallesi, A., VanMeter, J. W., Vijayakumar, N., di Oleggio Castello, M. V., Vohryzek, J., Vukojević, J., Whitaker, K. J., Whitmore, L., Wideman, S., Witt, S. T., Xie, H., Xu, T., Yan, C.-G., Yeh, F.-C., Yeo, B. T., & Zuo, X.-N. (2021). Brainhack: Developing a culture of open, inclusive, community-driven neuroscience. Neuron, 109(11), 1769-1775. doi:10.1016/j.neuron.2021.04.001.

    Abstract

    Social factors play a crucial role in the advancement of science. New findings are discussed and theories emerge through social interactions, which usually take place within local research groups and at academic events such as conferences, seminars, or workshops. This system tends to amplify the voices of a select subset of the community—especially more established researchers—thus limiting opportunities for the larger community to contribute and connect. Brainhack (https://brainhack.org/) events (or Brainhacks for short) complement these formats in neuroscience with decentralized 2- to 5-day gatherings, in which participants from diverse backgrounds and career stages collaborate and learn from each other in an informal setting. The Brainhack format was introduced in a previous publication (Cameron Craddock et al., 2016; Figures 1A and 1B). It is inspired by the hackathon model (see glossary in Table 1), which originated in software development and has gained traction in science as a way to bring people together for collaborative work and educational courses. Unlike many hackathons, Brainhacks welcome participants from all disciplines and with any level of experience—from those who have never written a line of code to software developers and expert neuroscientists. Brainhacks additionally replace the sometimes-competitive context of traditional hackathons with a purely collaborative one and also feature informal dissemination of ongoing research through unconferences.

    Additional information

    supplementary information
  • Geipel, I., Lattenkamp, E. Z., Dixon, M. M., Wiegrebe, L., & Page, R. A. (2021). Hearing sensitivity: An underlying mechanism for niche differentiation in gleaning bats. Proceedings of the National Academy of Sciences of the United States of America, 118: e2024943118. doi:10.1073/pnas.2024943118.

    Abstract

    Tropical ecosystems are known for high species diversity. Adaptations permitting niche differentiation enable species to coexist. Historically, research focused primarily on morphological and behavioral adaptations for foraging, roosting, and other basic ecological factors. Another important factor, however, is differences in sensory capabilities. So far, studies mainly have focused on the output of behavioral strategies of predators and their prey preference. Understanding the coexistence of different foraging strategies, however, requires understanding underlying cognitive and neural mechanisms. In this study, we investigate hearing in bats and how it shapes bat species coexistence. We present the hearing thresholds and echolocation calls of 12 different gleaning bats from the ecologically diverse Phyllostomid family. We measured their auditory brainstem responses to assess their hearing sensitivity. The audiograms of these species had similar overall shapes but differed substantially for frequencies below 9 kHz and in the frequency range of their echolocation calls. Our results suggest that differences among bats in hearing abilities contribute to the diversity in foraging strategies of gleaning bats. We argue that differences in auditory sensitivity could be important mechanisms shaping diversity in sensory niches and coexistence of species.
  • Gerrits, F., Senft, G., & Wisse, D. (2018). Bomiyoyeva and bomduvadoya: Two rare structures on the Trobriand Islands exclusively reserved for Tabalu chiefs. Anthropos, 113, 93-113. doi:10.5771/0257-9774-2018-1-93.

    Abstract

    This article presents information about two so far undescribed buildings made by the Trobriand Islanders, the bomiyoyeva and the bomduvadova. These structures are connected to the highest-ranking chiefs living in Labai and Omarakana on Kiriwina Island. They highlight the power and eminence of these chiefs. After a brief report on the history of this project, the structure of the two houses, their function, and their use are described, and information on their construction and their mythical background is provided. Finally, everyday as well as ritual, social, and political functions of both buildings are discussed. [Melanesia, Trobriand Islands, Tabalu chiefs, yams houses, bomiyoyeva, bomduvadova, authoritative capacities]

    Additional information

    link to journal
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gialluisi, A., Andlauer, T. F. M., Mirza-Schreiber, N., Moll, K., Becker, J., Hoffmann, P., Ludwig, K. U., Czamara, D., St Pourcain, B., Honbolygó, F., Tóth, D., Csépe, V., Huguet, H., Chaix, Y., Iannuzzi, S., Demonet, J.-F., Morris, A. P., Hulslander, J., Willcutt, E. G., DeFries, J. C., Olson, R. K., Smith, S. D., Pennington, B. F., Vaessen, A., Maurer, U., Lyytinen, H., Peyrard-Janvid, M., Leppänen, P. H. T., Brandeis, D., Bonte, M., Stein, J. F., Talcott, J. B., Fauchereau, F., Wilcke, A., Kirsten, H., Müller, B., Francks, C., Bourgeron, T., Monaco, A. P., Ramus, F., Landerl, K., Kere, J., Scerri, T. S., Paracchini, S., Fisher, S. E., Schumacher, J., Nöthen, M. M., Müller-Myhsok, B., & Schulte-Körne, G. (2021). Genome-wide association study reveals new insights into the heritability and genetic correlates of developmental dyslexia. Molecular Psychiatry, 26, 3004-3017. doi:10.1038/s41380-020-00898-x.

    Abstract

    Developmental dyslexia (DD) is a learning disorder affecting the ability to read, with a heritability of 40–60%. A notable part of this heritability remains unexplained, and large genetic studies are warranted to identify new susceptibility genes and clarify the genetic bases of dyslexia. We carried out a genome-wide association study (GWAS) on 2274 dyslexia cases and 6272 controls, testing associations at the single variant, gene, and pathway level, and estimating heritability using single-nucleotide polymorphism (SNP) data. We also calculated polygenic scores (PGSs) based on large-scale GWAS data for different neuropsychiatric disorders and cortical brain measures, educational attainment, and fluid intelligence, testing them for association with dyslexia status in our sample. We observed statistically significant (p < 2.8 × 10−6) enrichment of associations at the gene level, for LOC388780 (20p13; uncharacterized gene), and for VEPH1 (3q25), a gene implicated in brain development. We estimated an SNP-based heritability of 20–25% for DD, and observed significant associations of dyslexia risk with PGSs for attention deficit hyperactivity disorder (at pT = 0.05 in the training GWAS: OR = 1.23[1.16; 1.30] per standard deviation increase; p = 8 × 10−13), bipolar disorder (1.53[1.44; 1.63]; p = 1 × 10−43), schizophrenia (1.36[1.28; 1.45]; p = 4 × 10−22), psychiatric cross-disorder susceptibility (1.23[1.16; 1.30]; p = 3 × 10−12), cortical thickness of the transverse temporal gyrus (0.90[0.86; 0.96]; p = 5 × 10−4), educational attainment (0.86[0.82; 0.91]; p = 2 × 10−7), and intelligence (0.72[0.68; 0.76]; p = 9 × 10−29). This study suggests an important contribution of common genetic variants to dyslexia risk, and novel genomic overlaps with psychiatric conditions like bipolar disorder, schizophrenia, and cross-disorder susceptibility. Moreover, it revealed the presence of shared genetic foundations with a neural correlate previously implicated in dyslexia by neuroimaging evidence.
  • Gialluisi, A., Guadalupe, T., Francks, C., & Fisher, S. E. (2017). Neuroimaging genetic analyses of novel candidate genes associated with reading and language. Brain and Language, 172, 9-15. doi:10.1016/j.bandl.2016.07.002.

    Abstract

    Neuroimaging measures provide useful endophenotypes for tracing genetic effects on reading and language. A recent Genome-Wide Association Scan Meta-Analysis (GWASMA) of reading and language skills (N = 1862) identified strongest associations with the genes CCDC136/FLNC and RBFOX2. Here, we follow up the top findings from this GWASMA, through neuroimaging genetics in an independent sample of 1275 healthy adults. To minimize multiple-testing, we used a multivariate approach, focusing on cortical regions consistently implicated in prior literature on developmental dyslexia and language impairment. Specifically, we investigated grey matter surface area and thickness of five regions selected a priori: middle temporal gyrus (MTG); pars opercularis and pars triangularis in the inferior frontal gyrus (IFG-PO and IFG-PT); postcentral parietal gyrus (PPG) and superior temporal gyrus (STG). First, we analysed the top associated polymorphisms from the reading/language GWASMA: rs59197085 (CCDC136/FLNC) and rs5995177 (RBFOX2). There was significant multivariate association of rs5995177 with cortical thickness, driven by effects on left PPG, right MTG, right IFG (both PO and PT), and STG bilaterally. The minor allele, previously associated with reduced reading-language performance, showed negative effects on grey matter thickness. Next, we performed exploratory gene-wide analysis of CCDC136/FLNC and RBFOX2; no other associations surpassed significance thresholds. RBFOX2 encodes an important neuronal regulator of alternative splicing. Thus, the prior reported association of rs5995177 with reading/language performance could potentially be mediated by reduced thickness in associated cortical regions. In future, this hypothesis could be tested using sufficiently large samples containing both neuroimaging data and quantitative reading/language scores from the same individuals.

    Additional information

    mmc1.docx
  • Gisladottir, R. S., Bögels, S., & Levinson, S. C. (2018). Oscillatory brain responses reflect anticipation during comprehension of speech acts in spoken dialogue. Frontiers in Human Neuroscience, 12: 34. doi:10.3389/fnhum.2018.00034.

    Abstract

    Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialogue. Participants listened to short, spoken dialogues with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.

    Additional information

    data sheet 1.pdf
  • Gisselgard, J., Petersson, K. M., & Ingvar, M. (2004). The irrelevant speech effect and working memory load. NeuroImage, 22, 1107-1116. doi:10.1016/j.neuroimage.2004.02.031.

    Abstract

    Irrelevant speech impairs the immediate serial recall of visually presented material. Previously, we have shown that the irrelevant speech effect (ISE) was associated with a relative decrease of regional blood flow in cortical regions subserving the verbal working memory, in particular the superior temporal cortex. In this extension of the previous study, the working memory load was increased and an increased activity as a response to irrelevant speech was noted in the dorsolateral prefrontal cortex. We suggest that the two studies together provide some basic insights as to the nature of the irrelevant speech effect. Firstly, no area in the brain can be ascribed as the single locus of the irrelevant speech effect. Instead, the functional neuroanatomical substrate to the effect can be characterized in terms of changes in networks of functionally interrelated areas. Secondly, the areas that are sensitive to the irrelevant speech effect are also generically activated by the verbal working memory task itself. Finally, the impact of irrelevant speech and related brain activity depends on working memory load as indicated by the differences between the present and the previous study. From a brain perspective, the irrelevant speech effect may represent a complex phenomenon that is a composite of several underlying mechanisms, which depending on the working memory load, include top-down inhibition as well as recruitment of compensatory support and control processes. We suggest that, in the low-load condition, a selection process by an inhibitory top-down modulation is sufficient, whereas in the high-load condition, at or above working memory span, auxiliary adaptive cognitive resources are recruited as compensation.
  • Goldin-Meadow, S., Chee So, W., Ozyurek, A., & Mylander, C. (2008). The natural order of events: How speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the USA, 105(27), 9163-9168. doi:10.1073/pnas.0710060105.

    Abstract

    To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

    Additional information

    GoldinMeadow_2008_naturalSuppl.pdf
  • Gonzalez da Silva, C., Petersson, K. M., Faísca, L., Ingvar, M., & Reis, A. (2004). The effects of literacy and education on the quantitative and qualitative aspects of semantic verbal fluency. Journal of Clinical and Experimental Neuropsychology, 26(2), 266-277. doi:10.1076/jcen.26.2.266.28089.

    Abstract

    Semantic verbal fluency tasks are commonly used in neuropsychological assessment. Investigations of the influence of level of literacy have not yielded consistent results in the literature. This prompted us to investigate the ecological relevance of task specifics, in particular, the choice of semantic criteria used. Two groups of literate and illiterate subjects were compared on two verbal fluency tasks using different semantic criteria. The performance on a food criterion (supermarket fluency task), considered more ecologically relevant for the two literacy groups, and an animal criterion (animal fluency task) were compared. The data were analysed using both quantitative and qualitative measures. The quantitative analysis indicated that the two literacy groups performed equally well on the supermarket fluency task. In contrast, results differed significantly during the animal fluency task. The qualitative analyses indicated differences between groups related to the strategies used, especially with respect to the animal fluency task. The overall results suggest that there is not a substantial difference between literate and illiterate subjects related to the fundamental workings of semantic memory. However, there is indication that the content of semantic memory reflects differences in shared cultural background (in other words, formal education), as indicated by the significant interaction between level of literacy and semantic criterion.
  • Goodhew, S. C., & Kidd, E. (2017). Language use statistics and prototypical grapheme colours predict synaesthetes' and non-synaesthetes' word-colour associations. Acta Psychologica, 173, 73-86. doi:10.1016/j.actpsy.2016.12.008.

    Abstract

    Synaesthesia is the neuropsychological phenomenon in which individuals experience unusual sensory associations, such as experiencing particular colours in response to particular words. While it was once thought the particular pairings between stimuli were arbitrary and idiosyncratic to particular synaesthetes, there is now growing evidence for a systematic psycholinguistic basis to the associations. Here we sought to assess the explanatory value of quantifiable lexical association measures (via latent semantic analysis; LSA) in the pairings observed between words and colours in synaesthesia. To test this, we had synaesthetes report the particular colours they experienced in response to given concept words, and found that language association between the concept and colour words provided highly reliable predictors of the reported pairings. These results provide convergent evidence for a psycholinguistic basis to synaesthesia, but in a novel way, showing that exposure to particular patterns of associations in language can predict the formation of particular synaesthetic lexical-colour associations. Consistent with previous research, the prototypical synaesthetic colour for the first letter of the word also played a role in shaping the colour for the whole word, and this effect also interacted with language association, such that the effect of the colour for the first letter was stronger as the association between the concept word and the colour word in language increased. Moreover, when a group of non-synaesthetes were asked what colours they associated with the concept words, they produced very similar reports to the synaesthetes that were predicted by both language association and prototypical synaesthetic colour for the first letter of the word. This points to a shared linguistic experience generating the associations for both groups.
  • Gordon, R. L., Ravignani, A., Hyland Bruno, J., Robinson, C. M., Scartozzi, A., Embalabala, R., Niarchou, M., 23andMe Research Team, Cox, N. J., & Creanza, N. (2021). Linking the genomic signatures of human beat synchronization and learned song in birds. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200329. doi:10.1098/rstb.2020.0329.

    Abstract

    The development of rhythmicity is foundational to communicative and social behaviours in humans and many other species, and mechanisms of synchrony could be conserved across species. The goal of the current paper is to explore evolutionary hypotheses linking vocal learning and beat synchronization through genomic approaches, testing the prediction that genetic underpinnings of birdsong also contribute to the aetiology of human interactions with musical beat structure. We combined state-of-the-art genomic datasets that account for underlying polygenicity of these traits: birdsong genome-wide transcriptomics linked to singing in zebra finches, and a human genome-wide association study of beat synchronization. Results of competitive gene set analysis revealed that the genetic architecture of human beat synchronization is significantly enriched for birdsong genes expressed in songbird Area X (a key nucleus for vocal learning, and homologous to human basal ganglia). These findings complement ethological and neural evidence of the relationship between vocal learning and beat synchronization, supporting a framework of some degree of common genomic substrates underlying rhythm-related behaviours in two clades, humans and songbirds (the largest evolutionary radiation of vocal learners). Future cross-species approaches investigating the genetic underpinnings of beat synchronization in a broad evolutionary context are discussed.

    Additional information

    analysis scripts and variables
  • Goriot, C., Unsworth, S., Van Hout, R. W. N. M., Broersma, M., & McQueen, J. M. (2021). Differences in phonological awareness performance: Are there positive or negative effects of bilingual experience? Linguistic Approaches to Bilingualism, 11(3), 425-460. doi:10.1075/lab.18082.gor.

    Abstract

    Children who have knowledge of two languages may show better phonological awareness than their monolingual peers (e.g. Bruck & Genesee, 1995). It remains unclear how much bilingual experience is needed for such advantages to appear, and whether differences in language or cognitive skills alter the relation between bilingualism and phonological awareness. These questions were investigated in this cross-sectional study. Participants (n = 294; 4–7 year-olds, in the first three grades of primary school) were Dutch-speaking pupils attending mainstream monolingual Dutch primary schools or early-English schools providing English lessons from grade 1, and simultaneous Dutch-English bilinguals. We investigated phonological awareness (rhyming, phoneme blending, onset phoneme identification, and phoneme deletion) and its relation to age, Dutch vocabulary, English vocabulary, working memory and short-term memory, and the balance between Dutch and English vocabulary. Small significant (α < .05) effects of bilingualism were found on onset phoneme identification and phoneme deletion, but post-hoc comparisons revealed no robust pairwise differences between the groups. Furthermore, effects of bilingualism sometimes disappeared when differences in language or memory skills were taken into account. Learning two languages simultaneously is not beneficial to – and importantly, also not detrimental to – phonological awareness.

    Files private

    Request files
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
  • Goriot, C., Van Hout, R., Broersma, M., Lobo, V., McQueen, J. M., & Unsworth, S. (2021). Using the Peabody Picture Vocabulary Test in L2 children and adolescents: Effects of L1. International Journal of Bilingual Education and Bilingualism, 24(4), 546-568. doi:10.1080/13670050.2018.1494131.

    Abstract

    This study investigated to what extent the Peabody Picture Vocabulary Test (PPVT-4) is a reliable tool for measuring vocabulary knowledge of English as a second language (L2), and to what extent L1 characteristics affect test outcomes. The PPVT-4 was administered to Dutch pupils in six different age groups (4-15 years old) who were or were not following an English educational programme at school. Our first finding was that the PPVT-4 was not a reliable measure for pupils who were correct on maximally 24 items, but it was reliable for pupils who performed better. Second, both primary-school and secondary-school pupils performed better on items for which the phonological similarity between the English word and its Dutch translation was higher. Third, young inexperienced L2 learners' scores were predicted by Dutch lexical frequency, while older more experienced pupils' scores were predicted by English frequency. These findings indicate that the PPVT may be inappropriate for use with L2 learners with limited L2 proficiency. Furthermore, comparisons of PPVT scores across learners with different L1s are confounded by effects of L1 frequency and L1-L2 similarity. The PPVT-4 is however a suitable measure to compare more proficient L2 learners who have the same L1.
  • Goudbeek, M., Cutler, A., & Smits, R. (2008). Supervised and unsupervised learning of multidimensionally varying nonnative speech categories. Speech Communication, 50(2), 109-125. doi:10.1016/j.specom.2007.07.003.

    Abstract

    The acquisition of novel phonetic categories is hypothesized to be affected by the distributional properties of the input, the relation of the new categories to the native phonology, and the availability of supervision (feedback). These factors were examined in four experiments in which listeners were presented with novel categories based on vowels of Dutch. Distribution was varied such that the categorization depended on the single dimension duration, the single dimension frequency, or both dimensions at once. Listeners were clearly sensitive to the distributional information, but unidimensional contrasts proved easier to learn than multidimensional. The native phonology was varied by comparing Spanish versus American English listeners. Spanish listeners found categorization by frequency easier than categorization by duration, but this was not true of American listeners, whose native vowel system makes more use of duration-based distinctions. Finally, feedback was either available or not; this comparison showed supervised learning to be significantly superior to unsupervised learning.
  • De Graaf, T. A., Duecker, F., Stankevich, Y., Ten Oever, S., & Sack, A. T. (2017). Seeing in the dark: Phosphene thresholds with eyes open versus closed in the absence of visual inputs. Brain Stimulation, 10(4), 828-835. doi:10.1016/j.brs.2017.04.127.

    Abstract

    Background: Voluntarily opening or closing our eyes results in fundamentally different input patterns and expectancies. Yet it remains unclear how our brains and visual systems adapt to these ocular states.
    Objective/Hypothesis: We here used transcranial magnetic stimulation (TMS) to probe the excitability of the human visual system with eyes open or closed, in the complete absence of visual inputs.
    Methods: Combining Bayesian staircase procedures with computer control of TMS pulse intensity allowed interleaved determination of phosphene thresholds (PT) in both conditions. We measured parieto-occipital EEG baseline activity in several stages to track oscillatory power in the alpha (8-12 Hz) frequency-band, which has previously been shown to be inversely related to phosphene perception.
    Results: Since closing the eyes generally increases alpha power, one might have expected a decrease in excitability (higher PT). While we confirmed a rise in alpha power with eyes closed, visual excitability was actually increased (PT was lower) with eyes closed.
    Conclusions: This suggests that, aside from oscillatory alpha power, additional neuronal mechanisms influence the excitability of early visual cortex. One of these may involve a more internally oriented mode of brain operation, engaged by closing the eyes. In this state, visual cortex may be more susceptible to top-down inputs, to facilitate for example multisensory integration or imagery/working memory, although alternative explanations remain possible.

    Additional information

    Supplementary data
  • Grabe, E. (1998). Comparative intonational phonology: English and German. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057683.
  • Grabot, L., Kösem, A., Azizi, L., & Van Wassenhove, V. (2017). Prestimulus alpha oscillations and the temporal sequencing of audio-visual events. Journal of Cognitive Neuroscience, 29(9), 1566-1582. doi:10.1162/jocn_a_01145.

    Abstract

    Perceiving the temporal order of sensory events typically depends on participants' attentional state, thus likely on the endogenous fluctuations of brain activity. Using magnetoencephalography, we sought to determine whether spontaneous brain oscillations could disambiguate the perceived order of auditory and visual events presented in close temporal proximity, that is, at the individual's perceptual order threshold (Point of Subjective Simultaneity [PSS]). Two neural responses were found to index an individual's temporal order perception when contrasting brain activity as a function of perceived order (i.e., perceiving the sound first vs. perceiving the visual event first) given the same physical audiovisual sequence. First, average differences in prestimulus auditory alpha power indicated perceiving the correct ordering of audiovisual events irrespective of which sensory modality came first: a relatively low alpha power indicated perceiving auditory or visual first as a function of the actual sequence order. Additionally, the relative changes in the amplitude of the auditory (but not visual) evoked responses were correlated with participants' correct performance. Crucially, the sign of the magnitude difference in prestimulus alpha power and evoked responses between perceived audiovisual orders correlated with an individual's PSS. Taken together, our results suggest that spontaneous oscillatory activity cannot disambiguate subjective temporal order without prior knowledge of the individual's bias toward perceiving one or the other sensory modality first. Altogether, our results suggest that, under high perceptual uncertainty, the magnitude of prestimulus alpha (de)synchronization indicates the amount of compensation needed to overcome an individual's prior in the serial ordering and temporal sequencing of information.
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train" (5) and the "entangled-bank" (6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Greenfield, M. D., Honing, H., Kotz, S. A., & Ravignani, A. (Eds.). (2021). Synchrony and rhythm interaction: From the brain to behavioural ecology [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376.
  • Greenfield, M. D., Honing, H., Kotz, S. A., & Ravignani, A. (2021). Synchrony and rhythm interaction: From the brain to behavioural ecology. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200324. doi:10.1098/rstb.2020.0324.

    Abstract

    This theme issue assembles current studies that ask how and why precise synchronization and related forms of rhythm interaction are expressed in a wide range of behaviour. The studies cover human activity, with an emphasis on music, and social behaviour, reproduction and communication in non-human animals. In most cases, the temporally aligned rhythms have short—from several seconds down to a fraction of a second—periods and are regulated by central nervous system pacemakers, but interactions involving rhythms that are 24 h or longer and originate in biological clocks also occur. Across this spectrum of activities, species and time scales, empirical work and modelling suggest that synchrony arises from a limited number of coupled-oscillator mechanisms with which individuals mutually entrain. Phylogenetic distribution of these common mechanisms points towards convergent evolution. Studies of animal communication indicate that many synchronous interactions between the signals of neighbouring individuals are specifically favoured by selection. However, synchronous displays are often emergent properties of entrainment between signalling individuals, and in some situations, the very signallers who produce a display might not gain any benefit from the collective timing of their production.
  • Greenfield, P. M., Slobin, D., Cole, M., Gardner, H., Sylva, K., Levelt, W. J. M., Lucariello, J., Kay, A., Amsterdam, A., & Shore, B. (2017). Remembering Jerome Bruner: A series of tributes to Jerome “Jerry” Bruner, who died in 2016 at the age of 100, reflects the seminal contributions that led him to be known as a co-founder of the cognitive revolution. Observer, 30(2). Retrieved from http://www.psychologicalscience.org/observer/remembering-jerome-bruner.

    Abstract

    Jerome Seymour “Jerry” Bruner was born on October 1, 1915, in New York City. He began his academic career as psychology professor at Harvard University; he ended it as University Professor Emeritus at New York University (NYU) Law School. What happened at both ends and in between is the subject of the richly variegated remembrances that follow. On June 5, 2016, Bruner died in his Greenwich Village loft at age 100. He leaves behind his beloved partner Eleanor Fox, who was also his distinguished colleague at NYU Law School; his son Whitley; his daughter Jenny; and three grandchildren.

    Bruner’s interdisciplinarity and internationalism are seen in the remarkable variety of disciplines and geographical locations represented in the following tributes. The reader will find developmental psychology, anthropology, computer science, psycholinguistics, cognitive psychology, cultural psychology, education, and law represented; geographically speaking, the writers are located in the United States, Canada, the United Kingdom, and the Netherlands. The memories that follow are arranged in roughly chronological order according to when the writers had their first contact with Jerry Bruner.
  • Greenhill, S. J., Wu, C.-H., Hua, X., Dunn, M., Levinson, S. C., & Gray, R. D. (2017). Evolutionary dynamics of language systems. Proceedings of the National Academy of Sciences of the United States of America, 114(42), E8822-E8829. doi:10.1073/pnas.1700388114.

    Abstract

    Understanding how and why language subsystems differ in their evolutionary dynamics is a fundamental question for historical and comparative linguistics. One key dynamic is the rate of language change. While it is commonly thought that the rapid rate of change hampers the reconstruction of deep language relationships beyond 6,000–10,000 y, there are suggestions that grammatical structures might retain more signal over time than other subsystems, such as basic vocabulary. In this study, we use a Dirichlet process mixture model to infer the rates of change in lexical and grammatical data from 81 Austronesian languages. We show that, on average, most grammatical features actually change faster than items of basic vocabulary. The grammatical data show less schismogenesis, higher rates of homoplasy, and more bursts of contact-induced change than the basic vocabulary data. However, there is a core of grammatical and lexical features that are highly stable. These findings suggest that different subsystems of language have differing dynamics and that careful, nuanced models of language change will be needed to extract deeper signal from the noise of parallel evolution, areal readaptation, and contact.
  • De Gregorio, C., Valente, D., Raimondi, T., Torti, V., Miaretsoa, L., Friard, O., Giacoma, C., Ravignani, A., & Gamba, M. (2021). Categorical rhythms in a singing primate. Current Biology, 31, R1363-R1380. doi:10.1016/j.cub.2021.09.032.

    Abstract

    What are the origins of musical rhythm? One approach to the biology and evolution of music consists in finding common musical traits across species. These similarities allow biomusicologists to infer when and how musical traits appeared in our species (1). A parallel approach to the biology and evolution of music focuses on finding statistical universals in human music (2). These include rhythmic features that appear above chance across musical cultures. One such universal is the production of categorical rhythms (3), defined as those where temporal intervals between note onsets are distributed categorically rather than uniformly (2,4,5). Prominent rhythm categories include those with intervals related by small integer ratios, such as 1:1 (isochrony) and 1:2, which translates as some notes being twice as long as their adjacent ones. In humans, universals are often defined in relation to the beat, a top-down cognitive process of inferring a temporal regularity from a complex musical scene (1). Without assuming the presence of the beat in other animals, one can still investigate its downstream products, namely rhythmic categories with small integer ratios detected in recorded signals. Here we combine the comparative and statistical universals approaches, testing the hypothesis that rhythmic categories and small integer ratios should appear in species showing coordinated group singing (3). We find that a lemur species displays, in its coordinated songs, the isochronous and 1:2 rhythm categories seen in human music, showing that such categories are not, among mammals, unique to humans.

    Additional information

    supplemental information
  • Gretsch, P. (2004). What does finiteness mean to children? A cross-linguistic perspective on root infinitives. Linguistics, 42(2), 419-468. doi:10.1515/ling.2004.014.

    Abstract

    The discussion on root infinitives has mainly centered around their supposed modal usage. This article aims at modelling the form-function relation of the root infinitive phenomenon by taking into account the full range of interpretational facets encountered cross-linguistically and interindividually. Following the idea of a subsequent "cell partitioning" in the emergence of form-function correlations, I claim that it is the major fission between [±finite] which is central to express temporal reference different from the default here-and-now in tense-oriented languages. In aspect-oriented languages, a similar opposition is mastered with the marking of early aspectual forms. It is observed that in tense-oriented languages like Dutch and German, the progression of functions associated with the infinitival form proceeds from nonmodal to modal, whereas the reverse progression holds for the Russian infinitive. Based on this crucial observation, a model of acquisition is proposed which allows for a flexible and systematic relationship between morphological forms and their respective interpretational biases dependent on their developmental context. As for early child language, I argue that children entertain only two temporal parameters: one parameter is fixed to the here-and-now point in time, and a second parameter relates to the time talked about, the topic time; this latter time overlaps the situation time as long as no empirical evidence exists to support the emergence of a proper distinction between tense and aspect.

  • Grieco-Calub, T. M., Ward, K. M., & Brehm, L. (2017). Multitasking during degraded speech recognition in school-age children. Trends in Hearing, 21, 1-14. doi:10.1177/2331216516686786.

    Abstract

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children's multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children's accuracy and reaction time on the visual monitoring task were quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children's dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children's proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Groen, I. I. A., Jahfari, S., Seijdel, N., Ghebreab, S., Lamme, V. A. F., & Scholte, H. S. (2018). Scene complexity modulates degree of feedback activity during object detection in natural scenes. PLoS Computational Biology, 14: e1006690. doi:10.1371/journal.pcbi.1006690.

    Abstract

    Selective brain responses to objects arise within a few hundred milliseconds of neural processing, suggesting that visual object recognition is mediated by rapid feed-forward activations. Yet disruption of neural responses in early visual cortex beyond feed-forward processing stages affects object recognition performance. Here, we unite these discrepant findings by reporting that object recognition involves enhanced feedback activity (recurrent processing within early visual cortex) when target objects are embedded in natural scenes that are characterized by high complexity. Human participants performed an animal target detection task on natural scenes with low, medium or high complexity as determined by a computational model of low-level contrast statistics. Three converging lines of evidence indicate that feedback was selectively enhanced for high complexity scenes. First, functional magnetic resonance imaging (fMRI) activity in early visual cortex (V1) was enhanced for target objects in scenes with high, but not low or medium complexity. Second, event-related potentials (ERPs) evoked by target objects were selectively enhanced at feedback stages of visual processing (from ~220 ms onwards) for high complexity scenes only. Third, behavioral performance for high complexity scenes deteriorated when participants were pressed for time and thus less able to incorporate the feedback activity. Modeling of the reaction time distributions using drift diffusion revealed that object information accumulated more slowly for high complexity scenes, with evidence accumulation being coupled to trial-to-trial variation in the EEG feedback response. Together, these results suggest that while feed-forward activity may suffice to recognize isolated objects, the brain employs recurrent processing more adaptively in naturalistic settings, using minimal feedback for simple scenes and increasing feedback for complex scenes.

    Additional information

    data via OSF
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2017). Language-induced visual and semantic biases in visual search are subject to task requirements. Visual Cognition, 25, 225-240. doi:10.1080/13506285.2017.1324934.

    Abstract

    Visual attention is biased by both visual and semantic representations activated by words. We investigated to what extent language-induced visual and semantic biases are subject to task demands. Participants memorized a spoken word for a verbal recognition task, and performed a visual search task during the retention period. Crucially, while the word had to be remembered in all conditions, it was either relevant for the search (as it also indicated the target) or irrelevant (as it only served the memory test afterwards). On critical trials, displays contained objects that were visually or semantically related to the memorized word. When the word was relevant for the search, eye movement biases towards visually related objects arose earlier and more strongly than biases towards semantically related objects. When the word was irrelevant, there was still evidence for visual and semantic biases, but these biases were substantially weaker, and similar in strength and temporal dynamics, without a visual advantage. We conclude that language-induced attentional biases are subject to task requirements.
  • Groszer, M., Keays, D. A., Deacon, R. M. J., De Bono, J. P., Prasad-Mulcare, S., Gaub, S., Baum, M. G., French, C. A., Nicod, J., Coventry, J. A., Enard, W., Fray, M., Brown, S. D. M., Nolan, P. M., Pääbo, S., Channon, K. M., Costa, R. M., Eilers, J., Ehret, G., Rawlins, J. N. P., & Fisher, S. E. (2008). Impaired synaptic plasticity and motor learning in mice with a point mutation implicated in human speech deficits. Current Biology, 18(5), 354-362. doi:10.1016/j.cub.2008.01.060.

    Abstract

    The most well-described example of an inherited speech and language disorder is that observed in the multigenerational KE family, caused by a heterozygous missense mutation in the FOXP2 gene. Affected individuals are characterized by deficits in the learning and production of complex orofacial motor sequences underlying fluent speech and display impaired linguistic processing for both spoken and written language. The FOXP2 transcription factor is highly similar in many vertebrate species, with conserved expression in neural circuits related to sensorimotor integration and motor learning. In this study, we generated mice carrying an identical point mutation to that of the KE family, yielding the equivalent arginine-to-histidine substitution in the Foxp2 DNA-binding domain. Homozygous R552H mice show severe reductions in cerebellar growth and postnatal weight gain but are able to produce complex innate ultrasonic vocalizations. Heterozygous R552H mice are overtly normal in brain structure and development. Crucially, although their baseline motor abilities appear to be identical to wild-type littermates, R552H heterozygotes display significant deficits in species-typical motor-skill learning, accompanied by abnormal synaptic plasticity in striatal and cerebellar neural circuits.

    Additional information

    mmc1.pdf
  • Guadalupe, T., Mathias, S. R., Van Erp, T. G. M., Whelan, C. D., Zwiers, M. P., Abe, Y., Abramovic, L., Agartz, I., Andreassen, O. A., Arias-Vásquez, A., Aribisala, B. S., Armstrong, N. J., Arolt, V., Artiges, E., Ayesa-Arriola, R., Baboyan, V. G., Banaschewski, T., Barker, G., Bastin, M. E., Baune, B. T., Blangero, J., Bokde, A. L., Boedhoe, P. S., Bose, A., Brem, S., Brodaty, H., Bromberg, U., Brooks, S., Büchel, C., Buitelaar, J., Calhoun, V. D., Cannon, D. M., Cattrell, A., Cheng, Y., Conrod, P. J., Conzelmann, A., Corvin, A., Crespo-Facorro, B., Crivello, F., Dannlowski, U., De Zubicaray, G. I., De Zwarte, S. M., Deary, I. J., Desrivières, S., Doan, N. T., Donohoe, G., Dørum, E. S., Ehrlich, S., Espeseth, T., Fernández, G., Flor, H., Fouche, J.-P., Frouin, V., Fukunaga, M., Gallinat, J., Garavan, H., Gill, M., Suarez, A. G., Gowland, P., Grabe, H. J., Grotegerd, D., Gruber, O., Hagenaars, S., Hashimoto, R., Hauser, T. U., Heinz, A., Hibar, D. P., Hoekstra, P. J., Hoogman, M., Howells, F. M., Hu, H., Hulshoff Pol, H. E., Huyser, C., Ittermann, B., Jahanshad, N., Jönsson, E. G., Jurk, S., Kahn, R. S., Kelly, S., Kraemer, B., Kugel, H., Kwon, J. S., Lemaitre, H., Lesch, K.-P., Lochner, C., Luciano, M., Marquand, A. F., Martin, N. G., Martínez-Zalacaín, I., Martinot, J.-L., Mataix-Cols, D., Mather, K., McDonald, C., McMahon, K. L., Medland, S. E., Menchón, J. M., Morris, D. W., Mothersill, O., Maniega, S. M., Mwangi, B., Nakamae, T., Nakao, T., Narayanaswaamy, J. C., Nees, F., Nordvik, J. E., Onnink, A. M. H., Opel, N., Ophoff, R., Martinot, M.-L.-P., Orfanos, D. P., Pauli, P., Paus, T., Poustka, L., Reddy, J. Y., Renteria, M. E., Roiz-Santiáñez, R., Roos, A., Royle, N. A., Sachdev, P., Sánchez-Juan, P., Schmaal, L., Schumann, G., Shumskaya, E., Smolka, M. N., Soares, J. C., Soriano-Mas, C., Stein, D. J., Strike, L. T., Toro, R., Turner, J. A., Tzourio-Mazoyer, N., Uhlmann, A., Valdés Hernández, M., Van den Heuvel, O. A., Van der Meer, D., Van Haren, N. E., Veltman, D. J., Venkatasubramanian, G., Vetter, N. C., Vuletic, D., Walitza, S., Walter, H., Walton, E., Wang, Z., Wardlaw, J., Wen, W., Westlye, L. T., Whelan, R., Wittfeld, K., Wolfers, T., Wright, M. J., Xu, J., Xu, X., Yun, J.-Y., Zhao, J., Franke, B., Thompson, P. M., Glahn, D. C., Mazoyer, B., Fisher, S. E., & Francks, C. (2017). Human subcortical asymmetries in 15,847 people worldwide reveal effects of age and sex. Brain Imaging and Behavior, 11(5), 1497-1514. doi:10.1007/s11682-016-9629-z.

    Abstract

    The two hemispheres of the human brain differ functionally and structurally. Despite over a century of research, the extent to which brain asymmetry is influenced by sex, handedness, age, and genetic factors is still controversial. Here we present the largest ever analysis of subcortical brain asymmetries, in a harmonized multi-site study using meta-analysis methods. Volumetric asymmetry of seven subcortical structures was assessed in 15,847 MRI scans from 52 datasets worldwide. There were sex differences in the asymmetry of the globus pallidus and putamen. Heritability estimates, derived from 1170 subjects belonging to 71 extended pedigrees, revealed that additive genetic factors influenced the asymmetry of these two structures and that of the hippocampus and thalamus. Handedness had no detectable effect on subcortical asymmetries, even in this unprecedented sample size, but the asymmetry of the putamen varied with age. Genetic drivers of asymmetry in the hippocampus, thalamus and basal ganglia may affect variability in human cognition, including susceptibility to psychiatric disorders.

    Additional information

    11682_2016_9629_MOESM1_ESM.pdf
  • Guadalupe, T. (2017). The biology of variation in anatomical brain asymmetries. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Le Guen, O. (2008). Ubèel pixan: El camino de las almas. Ancestros familiares y colectivos entre los mayas yucatecos. Península, 3(1), 83-120. Retrieved from http://www.revistas.unam.mx/index.php/peninsula/article/viewFile/44354/40086.

    Abstract

    The aim of this article is to analyze the funerary customs and rituals for the souls among contemporary Yucatec Maya in order to better understand their relation to pre-Hispanic burial patterns. It is suggested that the souls of the dead are considered ancestors, distinguishable as family or collective ancestors according to several criteria: the place of burial, the place of ritual performance, and the ritual treatment. In this proposal, funerary practices as well as ritual categories of ancestors (family or collective) are considered reminiscences of ancient practices whose traces can be found throughout historical sources. Through an analysis of current funerary practices and their variations, this article aims to demonstrate that over time, and despite socioeconomic changes, ancient funerary practices (specifically those of the Postclassic period) have retained some homogeneity, preserving essential characteristics that can still be observed today.
  • Guerrero, L., & Van Valin Jr., R. D. (2004). Yaqui and the analysis of primary object languages. International Journal of American Linguistics, 70(3), 290-319. doi:10.1086/425603.

    Abstract

    The central aim of this study is to investigate three- and four-place predicates in Yaqui, which are characterized by having multiple object arguments. As with other Southern Uto-Aztecan languages, it has been said that Yaqui follows the primary/secondary object pattern (Dryer 1986). In fact, Yaqui presents three patterns: verbs like nenka 'sell' follow the direct-indirect object pattern, verbs like miika 'give' follow the primary object pattern, and verbs like chijakta 'sprinkle' follow the locative alternation pattern; the primary object pattern is the only one found with derived verbs. This paper shows that the contrast between direct object and primary object languages is not absolute but rather one of degree, and hence two "object" selection principles are needed to explain this mixed system. The two principles are not limited to Yaqui but are found in other languages as well, including English.
  • Guest, O., & Martin, A. E. (2021). How computational modeling can force theory building in psychological science. Perspectives on Psychological Science, 16(4), 789-802. doi:10.1177/1745691620970585.

    Abstract

    Psychology endeavors to develop theories of human capacities and behaviors on the basis of a variety of methodologies and dependent measures. We argue that one of the most divisive factors in psychological science is whether researchers choose to use computational modeling of theories (over and above data) during the scientific-inference process. Modeling is undervalued yet holds promise for advancing psychological science. The inherent demands of computational modeling guide us toward better science by forcing us to conceptually analyze, specify, and formalize intuitions that otherwise remain unexamined—what we dub open theory. Constraining our inference process through modeling enables us to build explanatory and predictive theories. Here, we present scientific inference in psychology as a path function in which each step shapes the next. Computational modeling can constrain these steps, thus advancing scientific inference over and above the stewardship of experimental practice (e.g., preregistration). If psychology continues to eschew computational modeling, we predict more replicability crises and persistent failure at coherent theory building. This is because without formal modeling we lack open and transparent theorizing. We also explain how to formalize, specify, and implement a computational model, emphasizing that the advantages of modeling can be achieved by anyone with benefit to all.
  • Guest, O., & Love, B. C. (2017). What the success of brain imaging implies about the neural code. eLife, 6: e21397. doi:10.7554/eLife.21397.

    Abstract

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI’s limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI’s successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? Language Learning, 58(suppl. 1), 207-216. doi:10.1111/j.1467-9922.2008.00472.x.
